Launch HN: Innate (YC F24) – Home robots as easy to program as AI agents

164 points by apeytavin 5 days ago

Hey HN! We’re Axel & Vig, the founders of Innate (https://innate.bot). We build general-purpose home robots that you can teach new tasks to simply by demonstrating them.

Our system combines a robotic platform (we call the first one Maurice) with an AI agent that understands the environment, plans actions, and executes them using skills you've taught it or programmed through our SDK.

If you've built AI agents powered by LLMs before, in particular with Claude Computer Use, this is how we intend the experience of building on Maurice to feel - except it acts on the real world!

You can see Maurice serving a glass here (https://bit.ly/innate-hn-vid-serving). Here is another example (https://bit.ly/innate-hn-vid-officer) in which it was given a digital ability (through the SDK) to send a notification to your phone when it sees someone in the house. In both cases, the only work on your end is about 30 minutes per physical skill to collect data to train the arm, and a couple of minutes to write a system prompt.

You can read more about how it works, about the paradigm we’re creating and find our Discord in our documentation (https://docs.innate.bot). We’ll be open-sourcing parts of the system there soon.

We want to lower the barrier to entry to robotics. Programming robots is usually complicated, time-consuming, and limited to experts even with AI helping you write code. We think it should be easier.

We come from an AI-for-robotics and HCI background as researchers at Stanford. We've worked on multiple hardware + agentic AI projects this past year, but this one was clearly the most surprising.

The first time we put GPT-4 in a body - after a couple of tweaks - we were surprised at how well it worked. The robot started moving around and figuring out when to use a tiny gripper, and we had only written 40 lines of Python on a tiny RC car with an arm. We decided to combine that with recent advances in robot imitation learning, such as ALOHA, to make the arm quickly teachable for any task.

We think it should be simple to teach robots to do tasks for us. AI agents offer a completely new paradigm for this, easy enough to help many non-roboticists start in the field, but still expandable enough to make a robot able to do very complex tasks.

The part that excites us most is that for every builder teaching their robot to perform a task, every other robot learns faster and better. We believe that by spreading our platforms as widely as possible, we can crowdsource massive and diverse datasets to build robotics foundation models that everyone contributes to.

Under the hood, our brain runs in the cloud and uses 9 different models: a YOLO, a SAM, and 7 VLMs from OpenAI, Google, Anthropic, and, most importantly, a couple of Llamas running on Groq to make the system think and act faster. Each model has a responsibility. Together, they act as if they were a single model able to navigate, talk, memorize, and activate skills. As a bonus, since these models keep getting better and smaller, every new release makes our robots smarter and faster!
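To give a rough idea of how the pieces fit together, here is a heavily simplified, illustrative sketch - stub functions standing in for the real models, not our production code:

    # Illustrative sketch only: each function stands in for one of the real
    # models (detector, segmenter, planner VLM); the real system runs in the cloud.
    from dataclasses import dataclass

    @dataclass
    class Observation:
        image: bytes           # latest camera frame
        detections: list[str]  # object labels (the detector's responsibility)
        masks: list[bytes]     # segmentation masks (the segmenter's responsibility)

    def detect_objects(image: bytes) -> list[str]:
        """Stand-in for the object detector."""
        return ["glass", "table"]

    def segment(image: bytes, labels: list[str]) -> list[bytes]:
        """Stand-in for the segmentation model."""
        return [b"" for _ in labels]

    def plan_next_action(obs: Observation, goal: str, skills: list[str]) -> str:
        """Stand-in for the planner VLM: picks one skill given the scene and goal."""
        return "pick_up_glass" if "glass" in obs.detections else "explore"

    def run_step(image: bytes, goal: str, skills: list[str]) -> str:
        labels = detect_objects(image)
        obs = Observation(image, labels, segment(image, labels))
        return plan_next_action(obs, goal, skills)

    if __name__ == "__main__":
        action = run_step(b"<camera frame>", "serve a glass of water",
                          ["pick_up_glass", "navigate_to", "explore"])
        print("next skill:", action)  # -> pick_up_glass

The real point is the division of labor: perception, planning, memory, and dialogue are separate responsibilities, which is what lets us swap any single model for a better one the day it's released.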

Our first robot Maurice is 25cm-high, has a 5DoF arm, a Jetson Orin Nano onboard, and comes equipped with our software installed and a mobile app to control it. Our first batch of users wants to teach it to clean floors, tidy up after kids, wake them up in the morning, play with them, or be a professional assistant connected to emails and socials. You can go wild quickly!

We're making a small batch available for HackerNews at $2,000 each for early builders who want to experiment, with $50 of agent credits free per month for a year. You can book one on our website with a (refundable) deposit if you're in the US. These units will start shipping in March - the first 10, shipping in February, are already booked.

We’d love your thoughts, experiences, and critiques. If you have ideas on what you’d use a home robot for, or feedback on how to make these systems more accessible, we’re hanging around for the coming hours in the comments. Let us know what you think!

bramd 5 days ago

Congrats on the launch.

Have you thought about assistive technology/accessibility tasks as well? I'd love to use such a device to control the touch screens on inaccessible coffee machines at my clients' offices, for example, which I can't operate without sight. I'm sure there are many more examples of such things.

Throwing complex robots at inaccessible devices is not the proper solution, but it is by far the quickest and most practical one. I'm not in the US, so I'm not even able to buy one, and I'm also hesitant to buy something that is totally bricked when the company/cloud goes under.

  • apeytavin 5 days ago

    That's a great idea! We thought about it in the context of elder care, where they could ask the robot to perform a task for them, but we first need the models to be a little better - which is why we start here, to collect the data before it spreads further.

    And by the way, we already have an app you can use to control the robot remotely, so you can use the skills you taught it from a distance as you make it navigate your home!

    On the point that it would get bricked if the company goes under: our agent runs on standard cloud infrastructure, so it would be easy to keep it running - we would open-source it. But if you're not in the US we can't easily ship it to you for the first batch anyway :)

    • whtsthmttrmn 5 days ago

      > We thought about it in the context of elder care, where they could ask the robot to perform a task for them, but we first need the models to be a little better - which is why we start here, to collect the data before it spreads further.

      I hope you continue this work for the foreseeable future, because this would be such a boon if it all pans out well.

      • apeytavin 5 days ago

        Thank you! Yes, there's a lot of good that can come out of this technology, and it needs to be developed with everyone's help in order to get there.

    • bramd 5 days ago

      Thanks, will keep an eye on your progress. If you want to discuss these kinds of use cases in the future, feel free to get in touch.

pj_mukh 5 days ago

Love this. Thoroughly interested in jumping on this wagon.

Just throwing an idea out there: Instead of re-inventing the proverbial wheeled base, I wonder if it would make sense to build on top of some established wheel bases, like the turtlebot (https://www.turtlebot.com/). And then go ahead and build an array of manipulator/grasper options on top.

It’ll get you an established supply chain (including of replacement parts) and let you focus in on your specialty here: the reconfigurable agentic behavior stack.

It would be a no-brainer for me to jump on this and start building if the base was a turtlebot!

  • apeytavin 5 days ago

    The mobile base below the hull is a TurtleBot 3 :) The arm is also a modified version of the open-source arm by Alexander Koch.

    We do a lot of it with off-the-shelf components, and we design the whole system so that we can quickly iterate, manufacture and ship from our garage in Palo Alto.

    • pj_mukh 5 days ago

      Ohoho no way. Christmas comes early this year!

      Edit: TurtleBot 3 Burger I suppose? Or Waffle?

rapjr9 5 days ago

Is this stable and fast enough that I could hand it a camera and train it to point that camera at myself as I move around playing a musical instrument to create a music video?

  • apeytavin 5 days ago

    TL;DR: Yes!

    You're not the first person to ask me this! If you look closely, there is actually a camera on the arm (that's partly how it learns tasks so fast) and we can use it to take pictures too. You can definitely define a primitive that would be "take picture", another one that is "send to server" and then have your own software assemble it in the way you want. Or just record and send it to the cloud / your computer.
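    To make that concrete, here's a rough sketch of what defining digital primitives could look like (illustrative names only, not the exact SDK syntax):

      # Illustrative sketch: hypothetical names, not the actual Innate SDK.
      PRIMITIVES = {}

      def primitive(name: str, description: str):
          """Register a function so the agent knows it can call it."""
          def wrap(fn):
              PRIMITIVES[name] = {"fn": fn, "description": description}
              return fn
          return wrap

      @primitive("take_picture", "Capture a frame from the arm camera.")
      def take_picture() -> bytes:
          return b"<jpeg bytes from the arm camera>"

      @primitive("send_to_server", "Upload an image to your own server.")
      def send_to_server(image: bytes) -> None:
          print(f"uploading {len(image)} bytes...")  # e.g. an HTTP POST in practice

      # Your own software (or the agent) can then chain them however you want:
      send_to_server(take_picture())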

    Now if it's about using a better camera that the robot would hold, you'll need to wait for the next generation that we'll reveal later next year.

exe34 5 days ago

This is very cool - I've been playing around in the same space with a simple tracked robot and a 2-DoF gripper. You seem to be quite a bit ahead of me in functionality.

https://imgur.com/a/WAHUIjQ

I'm using PaliGemma2 and MobileSAM for the vision part and Gemma for the thinking part. I'm hoping to stick with weights-available models as it's just a toy project.

for what it's worth this contraption cost under £200, but I'm using a desktop and a 3090 as the brains.

  • apeytavin 5 days ago

    Super cool, congrats man!

    This is how it started for us too! Check this out: https://x.com/ax_pey/status/1853462975216234851

    And like you, a SAM + VLM combo was the first thing we tried, and it already felt high-potential. It takes a lot of software work to put the right pieces together, but we think we've ended up with something promising, scalable, and extensible for a lot of people.

    And on the price: same, our initial prototype was around $250, but I had to connect it to my computer. It's still unclear to us and many others in the field whether compute can be offloaded, with low enough latency, to a computer somewhere else in the house or even in the cloud. In the meantime at least, we decided to include onboard compute so that you can get started quickly. Even for you it would be useful, just because we did the work of putting all the hardware and electronics together - it's a pretty good computer onboard :)

    • exe34 5 days ago

      I forgot to mention, there's a raspberry pi 4ish on board, but yes, latency is something I'm trying to optimise for right now :-D

      • apeytavin 5 days ago

        Same for us back then! If I did it today, though, I would love to try an RPi 5 - those look incredible. But honestly, NVIDIA just released their new Jetson Nano Super for $250, and I think at this point it's a no-brainer to use that instead of an RPi.

hn_user82179 5 days ago

very cool! Both demos were very entertaining and charming. Could you share other behaviors/tasks that you foresee Maurice being able to tackle? I personally have trouble brainstorming tasks that I would find useful.

  • apeytavin 5 days ago

    For sure! We see it as the perfect blend between physical and digital abilities, so tasks can be chained together very creatively.

    For this small form factor (25cm high), here is what we foresee (and it's a small subset of what's possible).

    - Picking up trash and moving it away.

    - Tidying up the floor after your kids (moving toys away), so that a roomba could clean - or a version of Maurice with a vacuum.

    - Watching for burglars / other surveillance tasks, and sending you an email or notification if it perceives someone or something fishy.

    - Greeting you in the morning, waking you up, telling you about your emails.

    - Playing with you or your kids: Hide and seek, Tag, Simon Says, Easy board games... Maybe this one is a bit of a stretch at first, but I could see it play Connect Four?

    - Taking pictures. I had multiple folks tell me they would like the robot to go around at home events, take pictures, and assemble them / put them somewhere in the cloud after. This should be pretty straightforward to do!

    - Checking in on your kids or elders frequently to see if they are okay. This second one is especially important to us. You could then define a primitive to send you an email or notification if something's wrong, or a daily recap of what people did, with pictures. And you could ask the robot remotely to do something.

    For many of these use cases we will post videos in the coming months about how to do them, as the AI and software get more reliable.

    For bigger robots running on the same system we develop (we have a bigger robot to reveal later), that would be:

    - Folding laundry, putting dishes in the dishwasher, cleaning countertops, tidying up the place... Chores-type tasks

    - Service robot bringing you food and drinks

    - Cooking. This one will come once it's safe enough, you want to be careful especially when there's heat involved, or liquids.

    The sky is the limit for this technology, but the bottleneck is data. The more data there is, the faster it will learn and the more complex the tasks it can do. We can't promise all of these are possible now, but it will get there faster than people think. You can look at research papers from our labs, like Universal Manipulation Interface or ALOHA, for existing physical use cases.

Animats 4 days ago

TaskType: Action with arm: pick_up_glass

Is that preprogrammed, or is the LLM doing that?

  • apeytavin 4 days ago

    The LLM decides to use that skill at that time. The skill itself was trained beforehand by the user, as shown in the docs, and the agent has information about the kind of context the skill can be used in and recognizes that this is the right moment.

    EDIT: I say LLM, but it's really not just an LLM. From the outside it looks like one because we designed it that way, but under the hood it's multiple models (as described in the post).

BrandiATMuhkuh 5 days ago

Congratulations on the launch.

This is pretty cool. I really like the simplicity.

While I was doing my PhD in HRI (~7 years ago), I played around with robots (mostly NAOs) to navigate and manipulate the real world. It was cool but really cumbersome.

I wish you all the best. Great UX is the key.

  • apeytavin 5 days ago

    As an HCI person, really glad to hear that! I think there's a LOT to explore here indeed, and it's the key to democratization. We have a lot of ideas to reveal in the coming months, such as teleoperation with just the phone, VR, web sims to experiment without buying first...

    It's also pretty rare to find HRI people, so I'm very happy to chat further if you're interested (there's a Discord link in the docs and on the website)

aanet 4 days ago

Congrats on the launch! Looks fantastic.

I've been thinking about exactly this kind of architecture (vision-language models + physical robot ==> performing tasks)

I'd love to tinker with one (or more) of these 'Bots.

Question: Is the entire inference in the cloud? or on the Bot's hardware?

  • apeytavin 4 days ago

    It's split between the bot and the cloud! High-level decision making and training is performed in the cloud for performance reasons, and low-level control and inference of VLAs is performed on the robot which has a GPU onboard!
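    Roughly, the loop looks like this (a simplified sketch with stand-in functions, not our actual code): the cloud picks the next skill occasionally, and the onboard GPU runs the low-level policy at control rate.

      # Simplified sketch of the cloud/robot split; names are illustrative.
      import time

      def ask_cloud_planner(snapshot: bytes, goal: str) -> str:
          """Cloud side: high-level decision making (slow, occasional)."""
          return "pick_up_glass"

      def run_local_policy(skill: str, hz: float = 30.0, steps: int = 30) -> None:
          """Robot side: low-level VLA inference driving the arm (fast loop)."""
          for _ in range(steps):
              # policy(observation) -> joint commands would go here
              time.sleep(1.0 / hz)

      goal = "serve a glass of water"
      skill = ask_cloud_planner(b"<camera frame>", goal)  # one round-trip per decision
      run_local_policy(skill)                             # many control steps per decision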

    And yeah, it's a powerful architecture, but it's not enough if you want to perform tasks: the VLMs orchestrate, but you need another model for manipulation. And we put all of these together :)

    Happy to chat further!

Oras 5 days ago

Looking really cool. The preorder stripe page says $300 but it says deposit. What’s the actual price when available?

  • apeytavin 5 days ago

    It also says $2,000 on Stripe and on the website, right?

    Thanks for the comment though, means that we should make it clearer maybe?

    • Oras 5 days ago

      It doesn’t on mobile, just checked again (browsing from the UK)

spieswl 5 days ago

Congrats on the launch.

How much did you learn from the lessons of other contemporary robotics frameworks that are out there? Do you envision focusing in on particular types of tasks later, or is it still uncertain how your robot design will evolve as the dataset grows?

  • apeytavin 5 days ago

    What do you mean by framework? The underlying OS like ROS (which we use)? The algorithms for manipulation, navigation, and decision-making?

    On types of tasks, we're envisioning chores first: laundry, cleaning, tidying up, dishes, etc. That should be the primary focus for this category of robots. But we're very open to other tasks - we know some folks have expressed a desire for more interactive kinds of tasks. In another comment I described multiple categories of tasks that folks have expressed interest in; you can find it by searching for "here is what we foresee" on this page.

saturatedfat 5 days ago

Really cool!

Had a couple questions:

- how far does the $50/mo get you?

- what's the battery life like/going to be like?

- do yall allow people or plan to allow people to swap out (some) of the models or orchestrate them yourself, at least incrementally? say I want to fine-tune my own maurice personality

  • apeytavin 5 days ago

    - We estimate that on average it would use $1 to $3 a day, and we picked that number so that it would basically be free for you!

    - Battery life is 3-4 hours under very high-intensity usage. Most likely it should last a whole day once we start optimizing :) It's a big battery for now.

    - Yes, we will allow full control. We don't intend to lock folks into our ecosystem, and if you look at the SDK you can see that you can actually train your own policies and just trigger them on the primitives. You could even scrap the whole agent - we think you'd lose a lot of its value, but why not?
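    For instance, here's roughly what plugging your own policy behind a primitive could look like (hypothetical names, not the final SDK):

      # Illustrative only: backing a primitive with a policy you trained yourself.
      class MyGraspPolicy:
          """Your own trained policy (e.g. an ACT/ALOHA-style imitation policy)."""
          def act(self, observation: dict) -> list[float]:
              return [0.0] * 5  # joint targets for the 5-DoF arm

      def make_grasp_primitive(policy: MyGraspPolicy):
          def grasp(observation: dict) -> list[float]:
              return policy.act(observation)
          return grasp

      # Register "grasp" with the agent, or skip the agent and call it yourself:
      grasp = make_grasp_primitive(MyGraspPolicy())
      print(grasp({"image": b"<camera frame>"}))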

    Apart from the cloud agent, most of the code will be open source anyway!

  • apeytavin 5 days ago

    Oh, and on changing the personality: I believe this could be done separately, yes. The main reason we keep our agent in the cloud is that it's going to improve quickly at decision-making as folks use it, but personality has little to do with that at the end of the day. Even inside the system, the model that decides what to say is separate!

buttofthejoke 5 days ago

Very cool! Are you planning on building out (or offering direction on) 'base machines'? Would love to see a plug and play hardware infrastructure, where plopping in a new brain/etc for development purposes is a thing.

  • apeytavin 5 days ago

    Happy to learn more about what you have in mind, there's a couple different things here it seems?

    Maurice is our base machine. We're going to open-source the design and hardware so that users can change it - for example, replace the gripper with something more complex or a different end-effector. Is that what you meant?

    We're also working on a bigger one, which will be able to reach tabletops for example. The idea is as much to create a really powerful robot for the home as showing our brain can work in different bodies.

    Ideally our software should power all kinds of robots easily, but it needs a lot more training data, and right now it still plugs into standard classical robotics software for some tasks, because it can't control everything immediately. But, for example, provided they give us the right interface, iRobot and other Roomba manufacturers should be able to use our AI to make their robots instantly smarter. We can also put it in Unitree dogs (the Go2, for example). But one thing at a time :)

whtsthmttrmn 5 days ago

If things go south, the world must rely on Will Smith to save us from the machines. He's the only one with the appropriate training.

  • beefnugs 5 days ago

    I don't know, I think metal is slap-resistant

  • apeytavin 5 days ago

    I'll personally equip our team with EMP weapons when that happens

smokel 5 days ago

I am happy for your launch and wish you a good deal of luck.

Personally I am really disappointed with the idea of requiring a subscription for a home robot, though. When I was younger I envisioned a home robot to have a brain inside itself, or at least in the home. Alas, it seems the world is developing in another way.

  • apeytavin 5 days ago

    I understand, and trust me, it bums me out too, but the compute required to do everything onboard just isn't there yet.

    Once it is, we'll be able to get rid of it and do everything on edge. In theory it's already possible, but the time it takes to run large models would make the robot look inactive most of the time, and not even remotely reactive enough to handle the real world.

    But really, that's our goal

w10-1 5 days ago

The site needs at least an email address for order feedback - you've got customers now :)

  • apeytavin 5 days ago

    Added in the FAQ, thank you!

  • apeytavin 5 days ago

    Oh sh*t you're right, the FAQ disappeared during the last redesign!

cpach 4 days ago

‘F24’? Was this a typo? I’ve only ever seen W for Winter and S for summer :)

  • apeytavin 4 days ago

    No, YC does 4 batches a year now, and they're smaller batches.

    The Fall one just ended, and we were 93 companies presenting at Demo Day (vs ~250 during the summer).

    • cpach 4 days ago

      Aha! TIL.

ZYbCRq22HbJ2y7 4 days ago

Why not show how much the video is sped up in the demo on your landing page?

  • apeytavin 4 days ago

    The speed-up factor isn't constant across the edit - it varies dynamically, so I couldn't figure out how to make an on-screen indicator change dynamically too. It's also distracting, and we want folks to focus on the chaining of difficult tasks, not necessarily the speed, which will increase soon but was not our focus (for now).

    I do think it's very clear that it's sped up a lot at some moments. Plus, you can sort of see it from the speed of the messages on the left.

    But you're right, for the sake of transparency we'll figure out a way to show it better.

    EDIT: For the sake of transparency, I can already tell you that it's sometimes sped up to 10x, mostly when it's navigating, because the robot hardware is slow. And by the way, the version we'll ship in February has a much faster drive train.

ceritium 5 days ago

Please create a robot arm that I could instruct to iron and fold my clothes.

  • apeytavin 5 days ago

    My cofounder is 2 meters away from me working on exactly that :)

    The small platform you see here is our first product, but we're aiming bigger.

    • cik 4 days ago

      This is my #1 use case now that something's been vacuuming my floor for 15 years, and washing for ~3. Add this labour saving device and I'm immediately buying, and selling to others.

      • apeytavin 4 days ago

        Happy to read that, and looking forward to that moment!

mritchie712 5 days ago

small nit, but why is the bot stopping so far from you? maybe it's the camera angle, but it looks like it wouldn't get within 10ft of either of you.

maybe for safety?

  • apeytavin 5 days ago

    It might seem bad but that's actually one of the coolest things about this new approach: it's the core model (today GPT-4o) that decides where it goes.

    Here, this was a suboptimal decision by Maurice, and by default we indeed have it avoid making costly mistakes. But consider all the good decisions the agent made otherwise: navigating all these different rooms with no prior knowledge of where anything is (just pictures it took earlier), getting close to the glass where Vignesh was, back to Axel, back to bed at the end...

    And here's the thing: every time an LLM provider releases a new model, Maurice gets better. We haven't even started fine-tuning the agent yet, but that will also improve its decisions a lot. There are many, many low-hanging fruits for better decision-making, and we expect that in the coming months the system will quickly get smarter and faster.

jeisc 4 days ago

what happens if one tries to teach the robot to do criminal acts?

  • apeytavin 4 days ago

    We simply don't allow that.

    If someone wants to train a physical task, the data is sent to us for training, so we would not allow this.

    And even if you somehow did that, the brain itself has knowledge of what the task is and what to do with it, and since it's running on very smart VLMs trained by the best labs, I expect they have built-in protections (on top of ours).

huragok 5 days ago

Can I bring my own hardware?

  • apeytavin 5 days ago

    Yes!

    We'll have to write a guide explaining how to plug in our AI agent, but it can work on any hardware whose base controls expose the right interface.

    If you want to chat more about what you need, feel free to join our Discord!

whalesalad 5 days ago

I'm actually shocked YC invested in this product. Who's buying a robot that they then need to sit down and program - only to do things like ... what? Water a plant? (with the caveat that the glass must be pre-filled with water and sitting in a spot for the robot to grab) I cannot think of this being able to do anything remotely useful and having the juice be worth the squeeze. Hate to be such a grump about it but really what are the real-world use cases? The website does absolutely nothing to show me why I might want or need this product. It's a scratch in search of an itch.

  • apeytavin 5 days ago

    That's some concrete feedback :)

    On your example: You can also teach it to fill the glass with water before going to the plant.

    And you're right, this is not even close to something the mass public would use, but that's not our goal for now. Right now this is for people who want to make robots, and the value proposition lies in the method to teach it and see what you can make of it. It's our job to make something that learns fast enough that it actually feels useful to our customers.

    I take from what you said that the website does not make the use cases clear, so we'll make an effort on that. That said, our goal right now is to market this clearly as a builder's tool, and I don't want to make it seem like we're already at mass-market capability.

    I added in another comment a list of use-cases that are already / will be possible, I can't put a link to it though but you can find it by searching on the page for "here is what we foresee" :)

  • outworlder 5 days ago

    Funny. I have a robot vacuum whose water tank I have to refill periodically and whose dust bin I have to empty. Even with your limited example, there's value in the automation. Thankfully they aren't restricted to just that. It's their first robot, after all.

    • apeytavin 5 days ago

      Also, big-time yes. Many folks would pay big money for a dumb robot that does just 1 or 2 tasks well - and even then... how good is a Roomba really? It misses places, it gets stuck, and there are still 10M sold every year.

      You want a robot that does the task well enough AND for which a failure isn't a big deal, as long as it doesn't happen too often.

      For bigger robots that can break stuff it will be too annoying, but for Maurice, I don't think so.

  • ukd1 5 days ago

    lol, everything starts somewhere - use some imagination.

    • whalesalad 5 days ago

      Truthfully I am trying! I know how hard it is to get a product off the ground especially something like this which is a hybrid of a hardware platform interacting with the physical world and software to operate it.

      The intruder demo is cool - but I have security cameras which can achieve an identical response (if not better, since they are at elevation, concealed, cannot be manipulated or disarmed or tossed into a closet with a closed door etc).

      If the company were to give me this device for free, and pay for me to take a month off of work to dedicate myself to engineering a program for it to run (I have been writing python ~20 years) I STILL could not come up with a compelling argument for it. At this point the best thing I can come up with would be covering it in a giant stuffed teddy bear and letting it run wild in my yard so my dog could chase it. But is that worth $2,000 and the opportunity cost of sitting down to program it? Absolutely not.

      I can see it being valuable to a middle/high school as a learning tool ... but to the layman absolutely not. It is a niche, low-volume business at best.

      • apeytavin 5 days ago

        Another great point you make here, and one I agree with:

        Indeed, some of these use cases are already possible - cheaper, faster, and better - with other solutions. But each of those requires you to install something new in your home, which takes time and money. This platform, and general-purpose robotics as a whole, is about creating a product that ultimately does everything well enough that the marginal gain of using something specialized isn't worth the time to install it. And many use cases, like folding laundry or loading the dishwasher, aren't doable with anything else anyway.

        You also make a great point that it takes time to make it work, but that's just for the first robot. Once we have enough of these out there and enough data, the time required to do any of these tasks will be much smaller.

        It's already quite remarkable that today a consumer can teach an arm to grab a glass with a couple buttons when 3 years ago you would have had to ask a team of engineers to create a complex system to do that. So imagine where we'll be 3 years from now :)

        • stuart73547373 5 days ago

          genuinely impressed by your tact and quality of response to such a weirdly hostile message

          • apeytavin 5 days ago

            Haha thank you. But really, his message was full of very concrete remarks that help us get better at presenting what we do.

          • whalesalad 5 days ago

            What about my message is hostile? Serious question. I know I’m being critical but I’m not trying to be hostile.

            Have you ever been to an investor meeting? That is a true hostile environment. Consider this the dry run.

            • dang 5 days ago

              Leading with "I'm actually shocked YC invested in this product" in response to a startup launch strikes me as quite hostile. For example.

              People routinely underestimate the hostility in their own comments. You should probably multiply your perception of it by 10x to have a sense of how it's landing with the median reader. https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

              • whalesalad 5 days ago

                I'm sorry dang, but no, this is not hostility. Again, a startup has a monumental mountain to climb; critical and authentic feedback is absolutely not hostile. It's valuable food for thought.

                The link you referenced is searching for "objects in the mirror"; I'm not sure how that's related here.

                Frankly HN has become an Orwellian environment, perhaps after 14+ years I’m no longer welcome here.

            • apeytavin 5 days ago

              In my experience investors are actually too nice and don't tell you what they think.

              I'm European (French); I like folks who are direct and with whom we can have a real discussion.