UniBrain OS

The mind is the next operating system.

A foundational AI that lets any brain-signal device talk to any application.


Vision

Making brain-computer interfaces universally accessible

For forty years, brain-computer interfaces have lived in laboratories.

A few have crossed over — restoring speech to people locked in by ALS, returning control to people robbed of it by stroke.

Most who need one still cannot use one. Not because the hardware isn't ready. Because the software must be rebuilt for every person, every device, every task.

We are building the layer that removes that barrier — so that anyone who needs a brain-computer interface can pick one up and use it.

Think it. Done.


The Promise

Signal in. Action out.

A person thinks. A device acts. Everything between — the noise of raw neural signal, the privacy of cerebral data, the translation into intent, the handoff to the downstream application — happens inside UniBrain OS.

The wearer doesn't see it. The developer doesn't build it. The device maker doesn't maintain it.

One system, carrying every step from brain to behaviour.

The Android of brain-computer interfaces — and the intelligence inside it.


Technology

UniBrain OS Core

UniBrain OS is not a model. It is a system — three layers, each solving a problem that has kept brain-computer interfaces in the lab for four decades.

  1. Signal-to-Intent Foundation Model

    One model, pretrained across people, devices, and modalities. It learns the structure that recurs in every brain — so a new user, on a new device, is understood from the first minute, not the first hour.

  2. Neural Trust Layer

    Raw neural signals never leave the device. Only intent does. The model improves through privacy-preserving methods designed for the most sensitive data a person has.

  3. Edge–Cloud Orchestration

    Inference runs on the device you are wearing. Heavier computation, when needed, runs in the cloud — never with your neural signal attached. You keep the latency. You keep the privacy.
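The principle shared by layers 2 and 3 — raw signal stays on the device, only a compact intent message crosses to the cloud — can be sketched in a few lines. Every name below is illustrative, not the UniBrain OS API, and the toy decoder stands in for the real foundation model:

```python
# Hypothetical sketch of the edge-cloud boundary. Assumption: all names
# (Intent, decode_on_device, send_to_cloud) are illustrative only.
from dataclasses import dataclass
from typing import List


@dataclass
class Intent:
    """What leaves the device: an action label and confidence, never raw samples."""
    action: str
    confidence: float


def decode_on_device(raw_samples: List[float]) -> Intent:
    """Stand-in for on-device inference: maps a signal window to an intent."""
    # Toy energy threshold in place of a real signal-to-intent model.
    energy = sum(x * x for x in raw_samples) / max(len(raw_samples), 1)
    if energy > 0.5:
        return Intent(action="select", confidence=0.9)
    return Intent(action="idle", confidence=0.6)


def send_to_cloud(intent: Intent) -> dict:
    """Only the intent is serialised; the raw signal never appears here."""
    return {"action": intent.action, "confidence": intent.confidence}


window = [0.9, -0.8, 1.1, 0.7]  # raw neural signal: stays on the device
payload = send_to_cloud(decode_on_device(window))
```

The point of the shape, not the toy maths: the cloud-facing function only ever receives the `Intent`, so there is no code path on which raw samples leave the wearer's hardware.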


Why Now

An inflection point

Three curves are meeting for the first time.

Hardware has left the laboratory. Dry-electrode wearables, miniaturised implants, consumer-grade headsets — brain signals are no longer trapped in clinical rooms.

Foundation-model AI has become capable enough to learn the shared structure of human neural activity from the imperfect data we actually have.

Regulation has begun to describe what responsible decoding of neural signals must look like — in California, in Colorado, in the European Union.

Each curve, alone, is not enough. Hardware without general software is a demo. Models without privacy are a non-starter. Regulation without technology becomes a wall.

Together, they open a window. We are building into it.

Welcome to the era of thought computing.


Founders

Dr Jingyuan Sun

Founder & Chief Scientific Officer

Assistant Professor of Computer Science at the University of Manchester. Author of more than thirty peer-reviewed publications as first or corresponding author on foundation-model AI for human brain representation, including work published at AAAI, NeurIPS, NAACL, EMNLP, and IEEE TNNLS. A decade of research across the Chinese Academy of Sciences, KU Leuven, and Manchester.

Dr Hongpeng Zhou

Co-Founder & Chief Technology Officer

Dame Kathleen Ollerenshaw Fellow (Assistant Professor) in the Department of Computer Science at the University of Manchester. Expertise in machine learning, Bayesian learning, interpretable ML, and AI for healthcare. PhD in Cognitive Robotics from TU Delft.


Scientific advisors

Prof. Alex Casson

Professor of Biomedical Engineering, University of Manchester · Alan Turing Institute Fellow

Non-invasive bioelectronic interfaces, wearable EEG, low-power BCI.

Dr Mustafa A. Mustafa

Senior Lecturer in Computer Science, University of Manchester · Affiliate, KU Leuven COSIC

Applied cryptography, privacy-preserving machine learning, federated learning, LLM safety.

Prof. Goran Nenadic

Professor of Computer Science, University of Manchester · Alan Turing Institute Fellow · Director, UK Healtex Network

Healthcare natural language processing and clinical translation.

Dr Zhenhong Li

Lecturer in Robotics and Control, University of Manchester · EPSRC Fellow · Head, Neurorobotics Lab · IEEE Senior Member

Physical human-robot interaction, rehabilitation robotics, multimodal BCI for robot control.


Team

  • Dr Jingyuan Sun · Founder & Chief Scientific Officer
  • Dr Hongpeng Zhou · Co-Founder & Chief Technology Officer
  • Senior ML Research Engineer · Open role · Hiring
  • ML Research Engineer · Open role · Hiring
  • ML Research Engineer · Open role · Hiring

We are hiring. Get in touch → careers@zestneuron.ai


Research

Selected published research

2025
  • NeuralFlix: A Simple While Effective Framework for Semantic Decoding of Videos from Non-invasive Brain Recordings. J. Sun, M. Li, M.-F. Moens. AAAI 2025
  • Multimodal brain decoding with pretrained foundation models. J. Sun, H. Li, V. Schlegel, Y. Sun. NeurIPS 2025
2024
  • Cross-subject brain-to-text decoding. J. Sun et al. NAACL 2024
  • Computational Linguistics for Brain Encoding and Decoding: Principles, Practices and Beyond. J. Sun, S. Wang, Z. Chen, J. Li, M.-F. Moens. ACL 2024 Tutorial
2023
  • Contrast, Attend and Diffuse to Decode High-Resolution Images from Brain Activities. J. Sun et al. NeurIPS 2023

View all publications ↗