Reverse-engineered from Claude Code CLI v2.1.34. This document provides a complete blueprint for implementing a multi-agent teammate coordination system in another code agent.
| name | orchestrating-swarms |
|---|---|
| description | Master multi-agent orchestration using Claude Code's TeammateTool and Task system. Use when coordinating multiple agents, running parallel code reviews, creating pipeline workflows with dependencies, building self-organizing task queues, or any task benefiting from divide-and-conquer patterns. |
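As a taste of the divide-and-conquer shape described above, here is a minimal, library-agnostic sketch of a fan-out/fan-in review swarm. This is not the TeammateTool API; the agent roles and the `run_agent` stub are illustrative placeholders for however your agent dispatches subtasks.

```python
from concurrent.futures import ThreadPoolExecutor

def run_agent(role: str, task: str) -> str:
    """Placeholder for dispatching a subtask to a teammate agent."""
    return f"[{role}] reviewed: {task}"

def review_swarm(diff: str, roles=("security", "performance", "style")) -> list[str]:
    # Fan out: each reviewer agent works on the same diff in parallel.
    with ThreadPoolExecutor(max_workers=len(roles)) as pool:
        futures = [pool.submit(run_agent, role, diff) for role in roles]
        # Fan in: the orchestrator collects every reviewer's findings.
        return [f.result() for f in futures]

print(review_swarm("fix: handle empty input in parser"))
```

Pipelines with dependencies follow the same shape, except the orchestrator submits each stage only after the futures it depends on have resolved.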
This worked on 14 May 2023. The instructions will probably require updating in the future.
LLaMA is a text-prediction model, similar to GPT-2 or to the version of GPT-3 that has not been fine-tuned yet. It is also possible to run fine-tuned versions (like Alpaca or Vicuna, I think) with this; those versions are more focused on answering questions.
Note: I have been told that this does not support multiple GPUs. It can only use a single GPU.
It is now possible to run LLaMA 13B with a 6 GB graphics card (e.g. an RTX 2060), thanks to the amazing work on llama.cpp. The latest change is CUDA/cuBLAS support, which lets you pick an arbitrary number of transformer layers to run on the GPU. This is perfect for low VRAM.
- Clone llama.cpp from git; I am on commit `08737ef720f0510c7ec2aa84d7f70c691073c35d`.
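If you would rather drive the layer offloading from Python than from the CLI, the llama-cpp-python bindings expose the same knob. A minimal sketch; the model path and layer count are placeholder assumptions you would tune to your setup:

```python
# Requires: pip install llama-cpp-python (built with cuBLAS for GPU offload)
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-13b.ggml.bin",  # hypothetical path to a converted model
    n_gpu_layers=20,  # number of transformer layers to offload to the GPU; raise/lower to fit VRAM
)
out = llm("Building a website can be done in 10 simple steps:", max_tokens=32)
print(out["choices"][0]["text"])
```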
Maybe you've heard about this technique but you haven't completely understood it, especially the PPO part. This explanation might help.
We will focus on text-to-text language models 📝, such as GPT-3, BLOOM, and T5. Models like BERT, which are encoder-only, are not addressed.
Reinforcement Learning from Human Feedback (RLHF) has been successfully applied in ChatGPT, hence its major increase in popularity. 📈
RLHF is especially useful in two scenarios 🌟:
- You can’t create a good loss function
  - Example: how do you calculate a metric to measure if the model’s output was funny? (see the sketch after this list)
- You want to train with production data, but you can’t easily label your production data
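The core trick in both scenarios is to replace the hand-written loss with a reward model learned from human preferences. Here is a minimal, self-contained sketch of that pairwise-preference training step; the features, data, and network are toy stand-ins, not a real RLHF setup:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy reward model: scores a response embedding with a single scalar.
# In real RLHF this is a full language model with a scalar value head.
reward_model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Hypothetical preference data: annotators marked `chosen` as funnier
# than `rejected` for the same prompt. No hand-written "funniness" metric needed.
chosen, rejected = torch.randn(128, 64), torch.randn(128, 64)

for step in range(100):
    # Bradley-Terry pairwise loss: push chosen scores above rejected ones.
    loss = -F.logsigmoid(reward_model(chosen) - reward_model(rejected)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The PPO stage then treats this learned score as the reward when fine-tuning the language model, which is what lets you optimize for qualities you could never encode in a hand-written loss.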
```python
import numpy as np

def convert_acc(acc):
    """Convert raw accelerometer measurements to roll, pitch and yaw.
    https://stackoverflow.com/questions/3755059/3d-accelerometer-calculate-the-orientation
    The yaw equation comes from here https://robotics.stackexchange.com/questions/14305/yaw-from-accelerometer-no-so-what-do-these-equations-actually-mean
    If you have a magnetometer check out https://habr.com/en/post/499190/

    Args:
        acc (np.array): the three raw accelerometer measurements along x, y, z

    Returns:
        np.array: roll, pitch, yaw in radians
    """
    ax, ay, az = acc
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.sqrt(ay**2 + az**2))
    yaw = 0.0  # yaw is not observable from gravity alone; needs a magnetometer
    return np.array([roll, pitch, yaw])
```
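A quick sanity check with the device at rest and gravity along z (a made-up reading):

```python
print(convert_acc(np.array([0.0, 0.0, 9.81])))  # -> [0. 0. 0.]: level, no rotation
```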
```dart
import 'dart:async';
import 'dart:convert';
import 'dart:io';

import 'package:meta/meta.dart';
import 'package:mime/mime.dart';
import 'package:path/path.dart';
import 'package:http/http.dart' as http;

abstract class ApiClient {
```
```python
import boto3
from boto3.session import Session

def assume_role(arn, session_name):
    """aws sts assume-role --role-arn arn:aws:iam::00000000000000:role/example-role --role-session-name example-role"""
    client = boto3.client('sts')
    account_id = client.get_caller_identity()["Account"]
    print(account_id)
    # Assume the role and build a session from the returned temporary credentials
    creds = client.assume_role(RoleArn=arn, RoleSessionName=session_name)['Credentials']
    return Session(aws_access_key_id=creds['AccessKeyId'],
                   aws_secret_access_key=creds['SecretAccessKey'],
                   aws_session_token=creds['SessionToken'])
```
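A hypothetical call, reusing the role ARN from the docstring; clients created from the returned session act as the assumed role:

```python
session = assume_role("arn:aws:iam::00000000000000:role/example-role", "example-role")
s3 = session.client("s3")
```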
Just a quickie test in Python 3 (using Requests) to see if Google Cloud Vision can be used to effectively OCR a scanned data table and preserve its structure, in the way that products such as ABBYY FineReader can OCR an image and provide Excel-ready output.
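The whole test amounts to a single `images:annotate` request. A minimal sketch, assuming a Vision API key; the key and image filename are placeholders:

```python
import base64
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
with open("scanned_table.png", "rb") as f:  # hypothetical scanned table image
    content = base64.b64encode(f.read()).decode("utf-8")

payload = {"requests": [{
    "image": {"content": content},
    "features": [{"type": "TEXT_DETECTION"}],
}]}
resp = requests.post("https://vision.googleapis.com/v1/images:annotate",
                     params={"key": API_KEY}, json=payload)
resp.raise_for_status()
result = resp.json()["responses"][0]
# The first annotation aggregates all text detected in the image.
print(result["textAnnotations"][0]["description"])
```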
The short answer: No. While Cloud Vision provides bounding polygon coordinates in its output, it doesn't provide them at the word or region level, which would be needed to calculate the data delimiters.
On the other hand, the OCR quality is pretty good if you just need to identify text anywhere in an image, without regard to its physical coordinates. I've included two examples:
### 1. A low-resolution photo of road signs