An open AI community for all
Hi there 👋 -- we are building a collection of shareable building blocks (from silicon to open models) empowering humanity to own its own AI. We bring together open source hackers in the fields of computer architecture, ASIC design, advanced systems, and neural network compilers who are bold enough to think that you don't have to work for a mega corporation to build your own computers for AI. Our mission is to bring the spirit of the Homebrew Computer Club back into vogue. The easiest place to find us is on our Discord or at FOSDEM.
AI Foundry is the open-source community platform stewarded by Ainekko, bringing open-source principles all the way down to silicon. We're building the foundation for flexible, software-defined AI systems, from lightweight edge devices to high-performance inference platforms.
Our long-term vision is to help the industry evolve where model training and deployment are open, transparent, and accessible to everyone. We support practitioners who want to benefit from AI in their work and life, particularly in model fine-tuning and deployment.
We believe AGI can only be achieved through cat super intelligence and we grew up playing the worst Atari game ever created.
Rather than re-invent the wheel, we try to work directly with as many upstream communities as possible. We are eternally grateful for ggml/llama.cpp, tinygrad, gcc, llvm, and RISC-V (just to name a few), and we're not shy about using those building blocks in our end-to-end designs.
When we can't find appropriate building blocks in the open, we're not shy about building them ourselves from scratch. That's why we're going all the way down to the chip design level and creating the world's first fully open source, manycore hardware architecture scalable from a few dozen processing units to 4096. We also have opinions about the software side of AI inference servers and are trying to change the state of the art there as well.
Oh, and mark our words: Transputers are due for a huge comeback in the AI-centric world.
AI Foundry provides modular building blocks for AI infrastructure.
Just as Linux opened up operating systems and Kubernetes made cloud infrastructure composable, AI Foundry brings that same spirit of openness to AI hardware and tooling.
ET is an open-source manycore platform for parallel computing acceleration. It is built on the legacy of the Esperanto Technologies ET-SOC1 chip.
To a first approximation, the ET Platform is a RISC-V manycore architecture.
The ET-SOC1 contains 1088 compute cores (called minions). Each minion has two RV64IMFC RISC-V harts with vendor-specific vector and tensor extensions.
There's an extra RISC-V core on board, called the Service Processor, that is used for chip bring-up and runtime management.
For a full understanding of the ET-SOC1 architecture, check the ET-SOC1 Programmer's Reference Manual.
Test prompts against multiple LLMs or versions, observe relative performance, and assess reliability through multiple runs. Supports both local and API-based LLM access.
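The multi-run comparison idea above can be sketched in a few lines. This is a minimal, hypothetical harness (not the actual AI Foundry tool's API): each "model" is any callable that takes a prompt and returns text, so a local llama.cpp wrapper or an API client can slot in the same way.

```python
# Hypothetical sketch of comparing a prompt across model backends.
# Any callable prompt -> text works as a backend (local or API-based).
from collections import Counter

def compare_models(prompt, models, runs=3):
    """Run `prompt` against each model several times; report the most
    common answer and how often it appeared (a crude reliability score)."""
    results = {}
    for name, model in models.items():
        outputs = [model(prompt) for _ in range(runs)]
        top, count = Counter(outputs).most_common(1)[0]
        results[name] = {"answer": top, "agreement": count / runs}
    return results

# Stand-in backends for illustration; a real setup would wrap e.g. a
# llama.cpp server or an OpenAI-compatible API endpoint here.
models = {
    "model-a": lambda p: "4",
    "model-b": lambda p: "four",
}
report = compare_models("What is 2 + 2?", models, runs=5)
print(report["model-a"])  # {'answer': '4', 'agreement': 1.0}
```

With nondeterministic sampling, the agreement score drops below 1.0, which is exactly the reliability signal the multiple-runs approach is after.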
Tools for improving AI transparency and accountability.
Techniques and implementations for efficient model compression and deployment.
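One of the most common compression techniques is quantization. As a hedged illustration (function names here are illustrative, not part of any AI Foundry API), here is symmetric per-tensor int8 quantization in pure Python:

```python
# Illustrative sketch of symmetric int8 quantization: map floats onto
# the integer range [-127, 127] with a single per-tensor scale.

def quantize_int8(weights):
    """Return (int8 values, scale) for a list of floats."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize_int8(q, scale):
    """Recover approximate floats from quantized values."""
    return [x * scale for x in q]

w = [0.4, -1.0, 0.3, 0.75]
q, s = quantize_int8(w)
w2 = dequantize_int8(q, s)
# Reconstruction error is bounded by half a quantization step.
assert all(abs(a - b) <= s / 2 + 1e-9 for a, b in zip(w, w2))
```

Real deployments typically quantize per-channel rather than per-tensor and use calibration data to pick scales, but the storage win is the same: one byte per weight instead of four.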
AI Foundry is built for:
We host regular events including:
The AI industry faces several challenges:
AI Foundry addresses these by offering an open, flexible alternative optimized for AI inference, available under permissive licenses for developers to adapt and extend.
All AI Foundry projects are licensed under Apache License 2.0 unless otherwise specified.
Building the future of AI infrastructure, openly and together.