A Karabiner-Elements configuration that fixes a problem with Claude Desktop's Option+Space quick entry: when confirming a Japanese IME conversion with the Enter key, the message is accidentally sent.
{
"title": "Claude Desktop IME Fix for Japanese Input",
"rules": [

#!/bin/bash
SCRIPTNAME=$(basename "$0")

# Minimal realpath replacement for systems without the coreutils utility:
# resolves an absolute path for a file or directory argument.
function realpath () {
  f="$@"
  if [ -d "$f" ]; then
    base=""
    dir="$f"
  else
    base="/$(basename "$f")"
    dir=$(dirname "$f")
  fi
  dir=$(cd "$dir" && pwd)
  echo "$dir$base"
}
This worked as of 14 May 2023; the instructions will probably need updating in the future.
LLaMA is a text-prediction model, similar to GPT-2 or to a version of GPT-3 that has not yet been fine-tuned. Fine-tuned versions (such as Alpaca or Vicuna, which are more focused on answering questions) should, I think, also run with this.
Note: I have been told that this does not support multiple GPUs; it can only use a single GPU.
It is now possible to run LLaMA 13B with a 6 GB graphics card (e.g. an RTX 2060), thanks to the amazing work on llama.cpp. The latest change is CUDA/cuBLAS support, which lets you pick an arbitrary number of the transformer layers to run on the GPU. This is perfect for low-VRAM setups.
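As a sketch of what that invocation looks like (the binary and model paths are examples from a typical llama.cpp checkout; tune the layer count to whatever fits your VRAM):

```sh
# Build llama.cpp with cuBLAS enabled, then offload some of the 40
# transformer layers of the 13B model to the GPU via --n-gpu-layers (-ngl).
# Paths and the layer count here are examples, not exact values.
./main -m ./models/13B/ggml-model-q4_0.bin --n-gpu-layers 20 -p "Hello"
```

The remaining layers stay on the CPU, which is what makes a 6 GB card workable.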
iOS: Settings > Privacy > Health > COVID-19 Exposure Logging > at the very bottom, Export Exposure Checks > send the file to a PC via AirDrop or email.
"""
Check out STEAM Powered (https://steampoweredshow.com/) where I have conversations
with women in STEAM to learn a bit about what they do and who they are.
https://www.steampoweredshow.com/learn-more
"""
from pprint import pprint
from collections import OrderedDict
import sys
import re
// Copyright (c) 2019 aNoken
#include <M5StickC.h>

// Second hardware UART on the M5StickC
HardwareSerial serial_ext(2);

// One JPEG frame: its length and a pointer to its data
typedef struct {
  uint32_t length;
  uint8_t *buf;
} jpeg_data_t;

jpeg_data_t jpeg_data;
import lcd
import utime
import sys
from machine import I2C
from Maix import GPIO
from fpioa_manager import *

# Initialise I2C0 at 400 kHz on pins 28 (SCL) and 29 (SDA)
i2c = I2C(I2C.I2C0, freq=400000, scl=28, sda=29)
# And a short delay to wait until the I2C port has finished activating.
utime.sleep_ms(100)
Combining TensorFlow for Poets and TensorFlow.js.
Retrain a MobileNet V1 or V2 model on your own dataset using the CPU only.
I'm using a MacBook Pro without an Nvidia GPU.
MobileNets can be used for image classification. This guide shows the steps I took to retrain a MobileNet on a custom dataset, and how to convert and use the retrained model in the browser with TensorFlow.js. Setting up, retraining the model, and using it in the browser can take less than 30 minutes in total (depending on the size of your dataset).
Example app - HTML/JS and a retrained MobileNet V1/V2 model.
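The conversion step can be sketched with the tensorflowjs_converter CLI (file names are examples; TensorFlow for Poets writes a frozen graph whose output node is named final_result, and only the tensorflowjs releases contemporary with this workflow still accept frozen-graph input):

```sh
# pip install tensorflowjs, then convert the retrained frozen graph
# into the sharded web format that tf.js can load in the browser.
tensorflowjs_converter \
  --input_format=tf_frozen_model \
  --output_node_names='final_result' \
  retrained_graph.pb \
  web_model/
```

Serve the resulting web_model/ directory alongside the HTML/JS app below.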
<!DOCTYPE html>
<html>
<head>
<meta charset='UTF-8'>
<meta http-equiv='X-UA-Compatible' content='IE=edge'>
<meta name='viewport' content='width=device-width, initial-scale=1'>
<style>
* {margin: 0}
</style>
"""
Convert YouTube subtitles (vtt) to human-readable text.
Download only subtitles from YouTube with youtube-dl:
    youtube-dl --skip-download --convert-subs vtt <video_url>
Note that the default subtitle format provided by YouTube is ass, which is hard
to process with a simple regex. Luckily youtube-dl can convert ass to vtt, which
is easier to process.