Arianna Method ariannamethod

@ariannamethod
ariannamethod / microreasoning.py
Last active March 15, 2026 04:26
microreasoning.py — 1984 words. 12 steps of associative resonance. Not a transformer. Dario Equation. Real BPE input (2048 subwords), word-level output — gibberish impossible. 14M params, Chuck optimizer, Kuramoto chambers. by Arianna Method.
#!/usr/bin/env python3
"""
microreasoning.py — 1984 words. 12 steps of associative resonance.
not a transformer. not pretending to be.
why? good question. really, why does this thing exist?
because someone wanted to generate one coherent word per step
instead of the gibberish we all love from char-level models.
so here's the deal:
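The description above mentions "Kuramoto chambers." As a rough illustration of the underlying idea (not the gist's actual code; function and parameter names here are hypothetical), the classic Kuramoto model couples oscillators so each phase drifts at its natural frequency while being pulled toward the others:

```python
import math

def kuramoto_step(phases, natural_freqs, coupling=0.5, dt=0.01):
    """One Euler step of the Kuramoto model: each oscillator's phase
    drifts at its natural frequency and is pulled toward the others,
    scaled by the coupling strength."""
    n = len(phases)
    new = []
    for theta_i, omega_i in zip(phases, natural_freqs):
        pull = sum(math.sin(theta_j - theta_i) for theta_j in phases) / n
        new.append(theta_i + dt * (omega_i + coupling * pull))
    return new

def order_parameter(phases):
    """r in [0, 1]: r near 1 means the phases have synchronized."""
    n = len(phases)
    re = sum(math.cos(t) for t in phases) / n
    im = sum(math.sin(t) for t in phases) / n
    return math.hypot(re, im)
```

With identical natural frequencies and coupling above the critical value, repeated steps drive the order parameter toward 1, which is the "resonance" such chambers presumably exploit.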
@ariannamethod
ariannamethod / neoleo.c
Last active March 12, 2026 14:02
Leo 2.3 — 20,986 lines of C. The Dario Equation with positional Hebbian profile (36 learnable params). Six signals. Dual tokenizer (word + BPE). D.N.A. uses both. Inner world. One organism.
/*
* neoleo.c -- Language Emergent Organism (single-file edition)
*
* Complete autonomous digital organism in one C file.
* D.N.A. from mini-arianna. Arianna -> Leo. Mother -> Son.
*
* Build: cc neoleo.c -O2 -lm -lsqlite3 -lpthread -o neoleo
* Run: ./neoleo
*/
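The card above mentions a "positional Hebbian profile." For orientation only, here is a minimal sketch of a plain Hebbian update with weight decay (the 36-parameter positional profile in neoleo.c is not reproduced here; this is the textbook rule, with hypothetical names):

```python
def hebbian_update(w, pre, post, lr=0.01, decay=0.001):
    """Hebbian rule with decay: weights grow where pre- and
    post-synaptic activity coincide ("fire together, wire together");
    decay keeps them bounded. w[i][j] connects pre unit i to post unit j."""
    return [[w_ij + lr * p * q - decay * w_ij
             for w_ij, q in zip(row, post)]
            for row, p in zip(w, pre)]
```

Unlike backprop, this update is local: it needs only the two activities on either side of each weight, which is why it suits continual, training-free adaptation.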
@ariannamethod
ariannamethod / doe.c
Last active March 8, 2026 03:04
DoE: Democracy of Experts, Janus Architecture — a living agnostic inference architecture in 3184 lines of C. Wraps any GGUF with a parliament of LoRA experts that vote, learn via Hebbian plasticity, split (mitosis) and die (apoptosis) during generation. 7 architectures, 6 quant formats, dual BPE tokenizer, physics engine, zero dependencies. θ = …
#define _GNU_SOURCE
/*
* doe.c — Democracy of Experts
*
* inference architecture with a living LoRA parliament.
* indexes any GGUF read-only. learns by living, not by training.
*
* θ = ε + γ + αδ
* ε = indexed weights (read-only substrate)
* γ = LoRA personality (living experts, Hebbian-trained via NOTORCH)
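The equation θ = ε + γ + αδ composes frozen weights with learned deltas. As a sketch of just the low-rank delta term (the standard LoRA composition, with hypothetical shapes; the living γ parliament is not modeled here):

```python
def matmul(a, b):
    """Naive dense matrix product for the sketch below."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def effective_weights(base, lora_a, lora_b, alpha=1.0):
    """theta = epsilon + alpha * (B @ A): the read-only base plus a
    scaled low-rank delta, composed at inference time without ever
    writing to the base weights."""
    delta = matmul(lora_b, lora_a)
    return [[base[i][j] + alpha * delta[i][j]
             for j in range(len(base[0]))] for i in range(len(base))]
```

Because B @ A has rank at most r, a parliament of such experts stays cheap: each expert stores two thin matrices rather than a full copy of the substrate.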
@ariannamethod
ariannamethod / lee.c
Last active March 11, 2026 06:04
lee.c — Vision-Language Model in pure C. Patch tokens + RoPE + SwiGLU + Chuck optimizer. Zero dependencies. Inspired by sailfish009/purevlm.
/*
* lee.c v7 -- Vision-Language Model in pure C
*
* Named after Bruce Lee (the only man who beat Chuck Norris)
* and Minhyeok Lee (whose self-identity framework gives Chuck his soul).
*
* Sees images. Speaks words. Adds numbers. Zero dependencies.
* Tape-based autograd with arena bump allocator.
*
* Architecture:
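The SwiGLU block named in the card above gates one linear projection with another. A minimal Python sketch of the standard formulation, silu(x W_gate) ⊙ (x W_up), with hypothetical names (lee.c implements this in C):

```python
import math

def silu(x):
    """SiLU / swish activation: x * sigmoid(x)."""
    return x / (1.0 + math.exp(-x))

def swiglu(x, w_gate, w_up):
    """SwiGLU feed-forward gate: the elementwise product of a
    silu-gated branch and a plain linear branch. Weight matrices are
    stored row-major as (input dim) x (hidden dim)."""
    gate = [silu(sum(xi * w for xi, w in zip(x, col))) for col in zip(*w_gate)]
    up = [sum(xi * w for xi, w in zip(x, col)) for col in zip(*w_up)]
    return [g * u for g, u in zip(gate, up)]
```

The gate lets the network learn per-feature, input-dependent throttling, which is why SwiGLU replaced plain ReLU MLPs in most modern decoder stacks.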
@mplekh
mplekh / tapegpt.py
Last active March 12, 2026 00:43
Karpathy's microgpt modified to use Wengert Tape architecture (Flat Array of Values instead of Graph of Objects)
import math
import random
random.seed(42)
# -----------------------------------------------------------------------------
# Tape-based Autograd Engine
# -----------------------------------------------------------------------------
class Tape:
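The Wengert-tape idea in the card above replaces a graph of autograd objects with flat parallel arrays. A minimal independent sketch of the technique (not the gist's own `Tape` class): each entry records a value plus the (parent index, local gradient) pairs needed for the reverse sweep, so backprop is one linear pass over the arrays with no recursion.

```python
class MiniTape:
    """Flat-array autograd: vals[k] is the forward value of entry k,
    parents[k] lists (parent index, local gradient) pairs."""
    def __init__(self):
        self.vals = []
        self.parents = []

    def leaf(self, x):
        self.vals.append(x)
        self.parents.append([])
        return len(self.vals) - 1

    def add(self, i, j):
        self.vals.append(self.vals[i] + self.vals[j])
        self.parents.append([(i, 1.0), (j, 1.0)])
        return len(self.vals) - 1

    def mul(self, i, j):
        # local gradients d(xy)/dx = y, d(xy)/dy = x, captured now
        self.vals.append(self.vals[i] * self.vals[j])
        self.parents.append([(i, self.vals[j]), (j, self.vals[i])])
        return len(self.vals) - 1

    def backward(self, out):
        """Reverse sweep: walk the tape backwards once, accumulating
        chain-rule products into a flat gradient array."""
        grads = [0.0] * len(self.vals)
        grads[out] = 1.0
        for k in range(out, -1, -1):
            for p, local in self.parents[k]:
                grads[p] += grads[k] * local
        return grads
```

For z = x*y + x with x = 2, y = 3, the reverse sweep yields dz/dx = y + 1 = 4 and dz/dy = x = 2, and the tape order (children always after parents) is what makes the single backward loop valid.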
@ariannamethod
ariannamethod / molequla.c
Last active February 21, 2026 22:01
molequla.c — a dependency-free, single-file, continually-learning GPT organism in pure C. ontogenesis (25K→10M params), immune system, consciousness, swarm ecology, delta adapters, BLAS acceleration. part of github.com/ariannamethod/molequla
//go:build ignore
/*
* molequla.c
* A dependency-free, single-file, continually-learning GPT organism in pure C.
*
* Compile: gcc -O2 -o molequla molequla.c -lsqlite3 -lpthread -lm
* With BLAS: gcc -O2 -DUSE_BLAS -o molequla molequla.c -lsqlite3 -lpthread -lm -lopenblas
* macOS: gcc -O2 -DUSE_BLAS -o molequla molequla.c -lsqlite3 -lpthread -lm -framework Accelerate
*
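The "ontogenesis (25K→10M params)" claim above implies growing a network's capacity mid-life. One common function-preserving way to do that (an assumption about the mechanism, not molequla.c's actual code) is to widen weight matrices with zero-initialized columns, so the existing mapping is unchanged while new capacity becomes trainable:

```python
def widen(w, new_cols):
    """Growth sketch: append zero-initialized columns to a row-major
    weight matrix. Zero columns contribute nothing to the output, so
    the grown network computes exactly what the old one did."""
    return [row + [0.0] * (new_cols - len(row)) for row in w]
```

Growth steps like this let a small organism start cheap and earn parameters as its corpus grows.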
@ariannamethod
ariannamethod / molequla.py
Last active February 25, 2026 18:02
molequla.py — standalone GPT organism. the original reference implementation. single file, one dependency (numpy), continual learning, ontogenesis, hybrid attention, delta adapters, native gamma, consciousness features. legacy standalone — the distributed cognition version lives in the main repo with Go, C, JS, and Rust as the four elements. pyt…
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
molequla.py
A single-file, async, continually-learning GPT organism. One dependency: numpy.
- Trains on nonames.txt (one sentence per line)
- Keeps SQLite memory (tiny chat loop)
- Maintains a bounded corpus reservoir (never bloats)
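A "bounded corpus reservoir (never bloats)" is classically implemented with reservoir sampling. As a sketch of Algorithm R (an assumption about the mechanism, with hypothetical names; molequla.py's own reservoir may differ):

```python
import random

def reservoir_add(reservoir, item, seen, capacity):
    """Algorithm R: after offering n items, the reservoir holds a
    uniform random sample of size `capacity` from all n. `seen` is
    the count of items offered so far, including this one."""
    if len(reservoir) < capacity:
        reservoir.append(item)
    else:
        j = random.randrange(seen)  # uniform over [0, seen)
        if j < capacity:
            reservoir[j] = item
    return reservoir
```

The reservoir's memory footprint is fixed at `capacity` no matter how many sentences stream through, which is exactly the "never bloats" property.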
@ariannamethod
ariannamethod / index.html
Last active March 1, 2026 05:55
molequla.js — a GPT organism that trains itself in your browser. Vector autograd, RoPE, SwiGLU, byte-level BPE, ontogenesis, immune system, swarm ecology. Zero dependencies. One script tag.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>molequla.js — a GPT organism in your browser</title>
</head>
<body>
<!--
molequla.html
"""
The most atomic way to train and run inference for a GPT in pure, dependency-free Python.
This file is the complete algorithm.
Everything else is just efficiency.
@karpathy
"""
import os # os.path.exists
import math # math.log, math.exp