Foundation for AI Ethics & Safety Research

AI Research & Development
for Real Problems

FAESR is a nonprofit R&D company. We build open-source AI tools, study how they change the world around us, and publish what we learn. Based in Virginia's Shenandoah Valley.

Who We Are

The Foundation for AI Ethics & Safety Research is a nonprofit research and development company. We build AI tools, study their implications, and work to make the technology more useful, more understood, and more accessible.

By R&D we mean the full scope of the term. We write code. We publish research. We think about policy. We ask hard questions — not the hype-cycle ones like “what happens when robots take all the jobs,” but the ones that matter in practice: What does responsible data collection look like when your AI assistant needs deep access to function? How do you build cognitive tools for people who actually need them? What are the real tradeoffs between local processing and cloud access, and when does each one make sense?

We're not selling a vision of AI. We're building tools, using them, learning from them, and sharing what we find.

What We Believe

AI should run on your hardware.

Your data, your models, your machine. Cloud dependency is a choice, not a requirement. We build local-first systems that keep control where it belongs — with you.

Open source is non-negotiable.

If you can't see how it works, you can't trust it. Every tool we publish ships with its source code. We believe transparency isn't a feature — it's a foundation.

Complexity should be accessible.

Powerful tools shouldn't require a PhD or a six-figure infrastructure budget. We design for makers, tinkerers, and independent builders who want to use AI without giving up ownership of their work.

Ethics are demonstrated, not declared.

Manifestos are easy. Building technology that actually respects privacy, runs without surveillance, and solves real problems — that's the hard part. That's what we do.

How We Work

We don't start with a roadmap and execute top-down. We start by building, and we let the work tell us what it needs.

Our development philosophy is iterative and deliberate. We build a tool. We use it. When it can't do something, we note it and keep going. We keep noting limitations until we reach a point where the project can't move forward without solving them — and then we solve them. This means our tools grow organically from actual use, not from spec sheets or feature wish lists.

Local-first is our starting position, not a purity test. Our tools collect data — sometimes a lot of it. That's often the whole point. The question isn't whether your AI assistant gathers information; it's who controls that information and where it lives. We default to keeping everything on your hardware, under your control. If a tool eventually needs broader access, we grant it deliberately and transparently, never because the cloud was the path of least resistance.

The rest of the industry can race to AGI. We're focused on building things people can use today, improving them based on how they actually get used, and doing it in a way we can stand behind.

What We Do

Build Tools

We develop open-source AI systems for practical applications — maker workflows, health data management, cognitive assistance, autonomous systems. Our tools are designed to run on personal hardware and put the user in control of their own data.

Research

We study the real-world implications of AI technology. How do these systems actually change the way people work, think, and make decisions? What are the genuine risks, and what's just noise? We publish our findings and contribute to the broader understanding of where this technology is headed.

Policy & Ethics

We think critically about how AI should be built, deployed, and governed. Not in the abstract — grounded in the practical experience of building and shipping real systems. When you've dealt with the actual tradeoffs, you bring something different to the conversation.

Projects

Moose

Local-First AI Engineering Assistant

Moose is an open-source AI assistant built for makers, engineers, and tinkerers. It integrates with the tools you actually use — 3D printers, CAD software, microcontrollers, code editors — and runs on your machine.

Moose uses a multi-agent architecture with specialist routing: each problem is routed to the specialist agent best equipped to handle it. It supports local LLM backends including LM Studio, Ollama, and llama.cpp, so you choose the models you trust.
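To make the routing idea concrete, here is a minimal sketch in Python, not Moose's actual implementation: the specialist names, keyword triggers, and model name are illustrative assumptions, and the endpoint shown is Ollama's OpenAI-compatible API on its default local port.

```python
# Minimal sketch of keyword-based specialist routing against a local LLM backend.
# NOT Moose's real architecture: specialists, keywords, and model are assumptions.
import json
import urllib.request

# Hypothetical specialist registry: keyword trigger -> (name, system prompt).
SPECIALISTS = {
    "print": ("3d_printing", "You are an expert in FDM slicing and printer tuning."),
    "cad": ("cad", "You are an expert in parametric CAD modeling."),
    "firmware": ("embedded", "You are an expert in microcontroller firmware."),
}

def route(prompt: str) -> tuple[str, str]:
    """Pick a specialist by keyword match; fall back to a generalist."""
    lowered = prompt.lower()
    for keyword, specialist in SPECIALISTS.items():
        if keyword in lowered:
            return specialist
    return ("general", "You are a helpful engineering assistant.")

def ask(prompt: str, model: str = "llama3.1") -> str:
    """Send the routed prompt to a local OpenAI-compatible chat endpoint."""
    name, system_prompt = route(prompt)
    body = json.dumps({
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": prompt},
        ],
    }).encode()
    req = urllib.request.Request(
        "http://localhost:11434/v1/chat/completions",  # Ollama's default port
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read())
    return f"[{name}] {reply['choices'][0]['message']['content']}"

if __name__ == "__main__":
    print(ask("My first layer keeps lifting when I print PETG."))
```

Because LM Studio and llama.cpp's server expose the same OpenAI-compatible chat API, a sketch like this usually swaps backends by changing nothing but the port.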

Moose is also how we develop Moose. We use it daily, document where it falls short, and build the next iteration based on what it actually needs — not what sounds good on a features page.

Supports: Bambu Lab · Blender · FreeCAD · Arduino · Python · Shell

About the Founder

Finch Behnett

Founder, FAESR

Finch Behnett is an engineer and maker based in Virginia's Shenandoah Valley. His background spans satellite imaging algorithms, network engineering, embedded systems, and emergency response — he holds FF1/FF2, Haz-Mat, and Swift Water Rescue certifications as a volunteer firefighter.

Finch founded FAESR because he believes the best way to demonstrate what AI should look like is to build it. Not debate it endlessly, not hype it, not fear it — build it, use it, and show people what's actually possible when the technology is developed with care.

Get Involved

FAESR is open source and community-driven.

Use our tools.

Download Moose, use it, break it, tell us what's wrong.

Contribute.

Pull requests, bug reports, documentation, and ideas are all welcome.

Follow the work.

We publish research and development updates regularly.