@eonist
Created March 13, 2026 06:33
AI assist with torque sensor

There's a lot of potential here, especially since you're already building custom firmware/controller logic and have deep experience with local AI. The torque sensor gives you a real-time force signal at ~100–200 Hz — that's a rich data stream most e-bike brands barely use beyond simple proportional assist.

Adaptive Rider Profile (on-device ML)

Run a lightweight model (TinyML / TFLite on the controller MCU, or on a companion ESP32) that learns your pedaling signature over time:

  • Builds a personal torque-cadence-speed profile across different terrains and conditions.
  • Automatically adjusts the assist curve without you touching PAS levels — it just knows you want more on climbs and less on flats.
  • Could run entirely offline, which fits Omera's local-first philosophy.
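Even before reaching for a neural network, the adaptive-curve idea can be sketched with a simple statistic. A minimal C sketch (hypothetical names and constants, not Omera firmware): learn the rider's typical torque per speed bin with an exponential moving average, then boost assist when current torque exceeds that learned norm.

```c
/* Minimal sketch (hypothetical names, not Omera firmware): learn the rider's
 * typical torque per speed bin with an exponential moving average, then boost
 * assist when current torque exceeds that learned norm. */

#define SPEED_BINS 4         /* e.g. 0-10, 10-20, 20-30, 30+ km/h */
#define EMA_ALPHA  0.05f     /* slow adaptation keeps the profile stable */

static float learned_torque[SPEED_BINS]; /* learned "normal" torque per bin */

static int speed_bin(float speed_kmh) {
    int b = (int)(speed_kmh / 10.0f);
    return b >= SPEED_BINS ? SPEED_BINS - 1 : b;
}

/* Call every control tick (~100-200 Hz). Returns an assist multiplier. */
float adaptive_assist(float torque_nm, float speed_kmh) {
    int b = speed_bin(speed_kmh);
    if (learned_torque[b] <= 0.0f)
        learned_torque[b] = torque_nm;                /* seed the profile */
    learned_torque[b] += EMA_ALPHA * (torque_nm - learned_torque[b]);
    if (learned_torque[b] <= 0.01f)
        return 1.0f;                                  /* no usable data yet */

    float effort = torque_nm / learned_torque[b];     /* >1 = working harder */
    float assist = 1.0f + 1.5f * (effort - 1.0f);     /* boost above the norm */
    if (assist < 1.0f) assist = 1.0f;                 /* never drag the rider */
    if (assist > 2.5f) assist = 2.5f;                 /* hard safety cap */
    return assist;
}
```

A learned model would replace the EMA with something richer, but the control-loop shape (read sensors, update profile, emit multiplier) stays the same.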

Predictive Terrain Assist

Fuse torque sensor data with GPS + altimeter to anticipate what's coming:

  • Pre-load motor torque 1–2 seconds before a hill hits, so assist feels seamless.
  • Learn your commute routes and pre-cache optimal power curves per segment.
  • Optimize battery spend: "You have 14 km left, 2 hills — here's the power budget."
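The pre-load behaviour itself is easy to express once sensor fusion supplies a distance to the next known climb. A minimal C sketch, with hypothetical names and a linear ramp as an assumption:

```c
/* Sketch of the pre-load ramp (hypothetical names; assumes GPS + altimeter
 * fusion already supplies the distance to the next known climb). */

#define PRELOAD_LEAD_S 1.5f  /* start ramping this long before the hill */

/* Extra assist torque (Nm), ramping linearly from 0 to hill_boost_nm
 * over the final PRELOAD_LEAD_S seconds of approach. */
float preload_torque(float dist_to_hill_m, float speed_ms, float hill_boost_nm) {
    if (speed_ms <= 0.1f) return 0.0f;           /* stationary: no pre-load */
    float eta_s = dist_to_hill_m / speed_ms;     /* time until the climb */
    if (eta_s >= PRELOAD_LEAD_S) return 0.0f;    /* still too far away */
    if (eta_s <= 0.0f) return hill_boost_nm;     /* already on the hill */
    return hill_boost_nm * (1.0f - eta_s / PRELOAD_LEAD_S);
}
```

The ramp shape (linear here) and lead time are tuning choices; the point is that the time-critical part is a few flops per tick, trivially cheap for the MCU.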

Rider Fatigue Detection

This is the novel one. The torque signal contains subtle patterns that shift as you fatigue:

  • Pedal stroke asymmetry — left vs. right leg force imbalance increases when tired.
  • Torque variability — normally smooth, consistent strokes become jerky when fatigued.
  • Cadence drift — natural RPM drops without the rider noticing.
  • An on-device model could detect fatigue onset and silently increase assist to keep you safe, or nudge you to take a break.

Riding Analytics / Training Mode

Log torque + cadence + speed at high resolution and pipe it to a companion app:

  • Power output estimation (power = torque × angular velocity) without needing a power meter.
  • Stroke-by-stroke analysis like what cyclists pay €500+ for with Garmin/SRM.
  • "Ghost mode" — ride against your own previous best on a route.
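The power estimate is one line of arithmetic, with one catch: the common shorthand "torque × cadence = watts" omits the unit conversion, since cadence in RPM must become angular velocity in rad/s.

```c
/* Mechanical power from torque and cadence:
 * P = tau * omega = torque(Nm) * cadence(RPM) * 2*pi/60. */

#define RPM_TO_RAD_S (2.0f * 3.14159265f / 60.0f)

float pedal_power_watts(float torque_nm, float cadence_rpm) {
    return torque_nm * cadence_rpm * RPM_TO_RAD_S;
}
```

For example, 20 Nm at 90 RPM is roughly 188 W, a plausible steady commuting effort.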

Anti-Theft / Rider Authentication

The torque signal is surprisingly biometric:

  • Everyone has a unique pedal stroke signature (force curve shape, L/R balance, cadence preference).
  • Train a small classifier on your stroke pattern.
  • If someone else pedals the bike → motor doesn't engage. No keys, no app, just pedal and it recognizes you.

What's Realistic for Omera v1

| Feature | Complexity | Hardware needed |
| --- | --- | --- |
| Adaptive assist curve | Medium | Torque + cadence + existing controller |
| Predictive terrain | Medium-high | + GPS module (ESP32 + u-blox) |
| Fatigue detection | Medium | Torque signal only, needs ML model |
| Power analytics | Low | Just logging + app |
| Rider authentication | Low-medium | Torque signal + small NN on MCU |

The power analytics + adaptive assist combo is probably the best bang-for-buck for v1 — useful immediately, differentiating vs. every other e-bike, and you can ship it with just the torque sensor you already have plus a small local model running on the controller. The rider-auth feature is a fun flex that fits Omera's anti-theft story too.

Want me to spec out the data pipeline for any of these (sensor → MCU → model → motor control)?

@eonist (Author) commented Mar 14, 2026

Here's a breakdown of where each feature from your Omera torque sensor AI spec can run — directly on the controller MCU/ESP32 ("on chip") vs. offloaded to an iPhone companion app.

On-Chip (MCU / ESP32 — TinyML)

These features need real-time, low-latency inference directly in the motor control loop and are well suited to TinyML frameworks like TFLite Micro running on your existing controller or a companion ESP32:

  • Adaptive Rider Profile — The core assist-curve adjustment must happen on-chip because it directly modulates motor output in real time. A small quantized (INT8) model learning your torque-cadence-speed profile fits comfortably within the ESP32-S3's RAM budget alongside the control loop. This is your most critical on-chip workload.

  • Predictive Terrain Assist (motor pre-loading) — The actual "ramp motor torque 1–2 seconds before the hill" command must execute on-chip with sub-10 ms latency. The MCU reads torque + GPS/altimeter and applies a pre-cached power curve.

  • Rider Authentication — A small neural-network classifier (a few KB) matching pedal stroke signatures runs at inference time on the MCU. If the stroke doesn't match, the motor simply doesn't engage — no phone needed, no network needed.

  • Fatigue Detection (inference) — The real-time detection of pedal asymmetry, torque variability, and cadence drift from the 100–200 Hz signal needs to happen on-chip so the controller can silently increase assist immediately.

On iPhone (Core ML / Companion App)

These features involve heavier computation, richer UI, or data that benefits from the phone's Neural Engine and storage:

  • Model Training & Updates — While inference runs on-chip, training the adaptive profile and fatigue models is too heavy for an MCU. The iPhone can retrain/fine-tune models on accumulated ride data using Core ML, then push updated weight files back to the ESP32 over BLE.

  • Predictive Terrain — Route Learning & Battery Budgeting — The "learn your commute routes and pre-cache optimal power curves" and "you have 14 km left, 2 hills — here's the power budget" logic involves GPS route history, map data, and optimization that fits naturally on the phone. The iPhone computes the plan, then sends segment-by-segment power targets to the MCU.

  • Riding Analytics / Training Mode — High-resolution logging of torque, cadence, and derived power, stroke-by-stroke analysis, and "ghost mode" comparisons all require storage, visualization, and a UI — that's purely a companion app feature.

  • Fatigue Detection (pattern analysis & alerts) — While the MCU detects fatigue in real time, deeper trend analysis ("you've been fatiguing earlier this week") and nudge-to-rest notifications live in the app.
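The battery-budgeting idea above amounts to splitting the remaining watt-hours across route segments, weighting hills more heavily. A crude C sketch of that planner logic (hypothetical function and parameter names; a real planner would add physics and margins):

```c
/* Crude per-segment power budget for the "14 km left, 2 hills" idea
 * (hypothetical planner logic; segment weights and durations would come
 * from the phone's route model). Hills get proportionally more power
 * while the total energy spent still equals the remaining budget. */

void power_budget(float remaining_wh, const float *seg_weight,
                  const float *seg_hours, float *seg_power_w, int n) {
    float weighted_hours = 0.0f;
    for (int i = 0; i < n; i++) weighted_hours += seg_weight[i] * seg_hours[i];
    /* Scale so that sum(power_i * hours_i) == remaining_wh exactly. */
    float base = weighted_hours > 0.0f ? remaining_wh / weighted_hours : 0.0f;
    for (int i = 0; i < n; i++) seg_power_w[i] = base * seg_weight[i];
}
```

The phone would run this over the learned route, then stream each segment's power target to the MCU over BLE as the rider reaches it.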

Split Summary

| Feature | On-Chip (MCU/ESP32) | iPhone (Core ML App) |
| --- | --- | --- |
| Adaptive assist curve | ✅ Real-time inference | Model retraining, profile sync |
| Predictive terrain | ✅ Motor pre-load commands | Route learning, battery budget |
| Fatigue detection | ✅ Real-time signal analysis | Trend analysis, break alerts |
| Power analytics | Raw data logging only | ✅ Full UI, charts, ghost mode |
| Rider authentication | ✅ Full inference on-chip | Enrollment/training of classifier |

The clean split is: the MCU owns anything that touches the motor control loop in real time, while the iPhone owns training, planning, visualization, and anything that benefits from a screen or persistent storage. This also aligns well with Omera's local-first philosophy — the bike works fully offline on-chip, and the phone is an optional enhancement layer.


@eonist (Author) commented Mar 14, 2026

For your Omera e-bike controller running TinyML workloads (adaptive assist, fatigue detection, rider auth) alongside real-time motor control and BLE to the iPhone, the ESP32-S3 is the strongest all-around pick. Here's how it stacks up against the other realistic candidates.

ESP32-S3 — Best All-Rounder (Recommended)

The S3's headline feature is its vector processing extensions (PIE) on dual Xtensa LX7 cores at 240 MHz, which accelerate quantized neural-network operations by roughly 40× compared to unaccelerated inference. Key specs:

  • 512 KB SRAM + optional 2–8 MB PSRAM

  • Built-in Wi-Fi 4 + BLE 5.0 (no separate radio chip needed for iPhone comms)

  • Rich ADC/SPI/UART for torque sensor input and motor PWM output

  • Proven TFLite Micro support: ~50–60 ms inference, ~240 KB model size, ~350 KB RAM usage for a quantized CNN

  • Costs ~€3–5 per chip, massive community and tooling

This comfortably handles your small classifier models (rider auth, fatigue detection) at 100–200 Hz sensor rates while leaving headroom for motor control on the second core.

Nordic nRF5340 — Best for Ultra-Low Power + BLE Quality

If you prioritize best-in-class BLE and ultra-low power consumption over raw ML throughput, the nRF5340 is worth considering:

  • Dual Arm Cortex-M33 (128 MHz app core + 64 MHz network core)

  • Excellent BLE 5.0 with Thread/Zigbee multi-protocol

  • Edge Impulse partnership for rapid TinyML deployment — gesture models run in 1.7 ms using ~4 KB

  • Lower ML ceiling than the S3, but sufficient for small sensor classifiers

The trade-off is less community momentum for motor-control applications and a more constrained toolchain compared to ESP-IDF.

ESP32-P4 — Future Upgrade Path

The P4 is Espressif's 2026 powerhouse: dual RISC-V at 400 MHz, 32 MB PSRAM, AI instruction extensions, and hardware H.264 encoding. However:

  • It requires a separate ESP32-C6 co-processor for Wi-Fi/BLE — adds board complexity

  • Designed for vision/multimedia, which is overkill for torque sensor ML

  • ESP-IDF support is mature but Arduino support is still limited

This makes sense if Omera v2 adds a camera (e.g., dash-cam or gesture control), but it's overengineered for v1.

STM32N6 — Overkill

The STM32N6 packs a dedicated Neural-ART NPU delivering 600 GOPS at 800 MHz on a Cortex-M55. It's designed for on-chip computer vision and audio — far beyond what torque-sensor classifiers need, and significantly more expensive.

Recommendation for Omera v1

| Factor | ESP32-S3 | nRF5340 | ESP32-P4 |
| --- | --- | --- | --- |
| ML inference speed | ~50–60 ms | ~1.7 ms (tiny models) | Faster (400 MHz) |
| BLE quality | Good (BLE 5.0) | Excellent (BLE 5.0) | Via co-processor |
| Motor control I/O | Excellent | Moderate | Excellent |
| Power consumption | Moderate | Very low | Higher |
| Cost per chip | ~€3–5 | ~€5–8 | ~€8–12 |
| TinyML ecosystem | TFLite Micro, mature | Edge Impulse, growing | TFLite Micro, newer |
| Board complexity | Single chip | Single chip | Needs C6 co-processor |

Go with the ESP32-S3 (N8R2 or N16R8 variant) for Omera v1. It gives you TinyML inference, BLE to iPhone, motor control GPIOs, and GPS module support (via UART to a u-blox module) — all on a single chip at a few euros. The N16R8 (16 MB flash, 8 MB PSRAM) gives you generous headroom for model storage and data buffering. You already know ESP-IDF from your Arduino/embedded work, so the ramp-up time is minimal.

