35 kLOC is quite a bit. I wonder how straightforward and maintainable this app ended up being. That would require taking a look at the sources. While good Rails code tends to be very terse, the frontend may be quite voluminous.
> I believe within a couple of months, when things like log tailing and automated testing and native version control get implemented
This sounds a bit too optimistic, especially around automated testing, but yes, eventually this all will be there.
> an extremely powerful tool for even non-technical people to write production-quality apps
But why would non-technical people even think in terms of log tailing and version control, any more than they think about the gauge of the wiring in their walls, or the kind of modulation their Wi-Fi devices use? For a really non-technical audience to make good use of such tools, it won't be enough for the AI to be a competent coder. The AI would have to become a competent architect and a competent senior SWE, translating from product-management language to software-development language without even surfacing it when not explicitly asked. It's going to be quite a challenge to make that work, and work about as reliably as with a human team.
Example scenario: you have a codebase that you iterated on with an LLM, and it contains, say, 15 features with various implementation details. You continue tomorrow and want to make a small change to handle an edge case. While making changes to one of the 3 required files, it suddenly decides to also rewrite / "improve" other parts of the code that have nothing to do with your request, and pieces of the previous logic no longer exist. Since it made 18 changes to the file, and there are 3 such files, good luck spotting that without a thorough and detailed change review.
Also, if you made manual changes to the generated code and then ask it to add something to the code (within the same "conversation" / context), it will often replace your changes with how it originally wanted the code to look.
I have entire codebases of embedded software in C, without the shortcuts of modern programming languages, in way fewer than 35k lines.
Not 35k lines, but there's one line that's about 48,000 characters long. So either they've intentionally obfuscated it, or ChatGPT just churned out one long line.
Edit: Running it through a prettifier, the code comes out to about 33k lines.
Edit: Ran it through another prettifier; it might turn into 35k lines. :D
I think people will have to recalibrate on this. The LOCs do things that you otherwise would not do. Features and details that simply would not happen — because they are too code/time intensive for most projects. It just won't matter anymore.
> But why would non-technical people would even think in terms of log tailing and version control
They won't! They won't have to. The obvious good stuff that everyone thinks the AI tool should be able to do will just work, because the people building the tools will obviously focus mostly on making them work.
I can't really imagine producing that much code in that short amount of time and holding any amount of it in my head. I'd bet money there's code in there that does the same thing but different, leading to all kinds of little inconsistencies that make this code worthless in any serious context.
https://www.recipeninja.ai/recipe/r_ttOB5xyqpOLXCL/gluten-fr...
I believe I'm going to need a new oven...
On a more serious note: I've found that for debugging difficult issues, o1 Pro is in a league of its own.
Claude Code's eagerness to do work will often fix things given enough time, especially for self-contained pieces of software, but I still find myself going to o1 Pro more often than I'd expect.
A coworker and I did a comparison the other day, where we fired up o1 Pro and Claude Code with the same refactor. o1 Pro one-shotted it, while Claude Code took a few iterations.
Interestingly enough, the _thinking_ time of o1 Pro led us to just commit the Claude Code changes, as they were both finished in around the same time (1 min 37s vs. 2+ minutes), however we did end up using some feedback from o1 to fix an issue Claude hadn't caught. YMMV
Probably the main value engineers have for a maintenance project is context. I wonder what happens when we fully cede context to the machines...
Today, I got a request at work for a feature ("let's offer coupons!") that I thought would take a week. That was until I found out that another engineer wrote most of the code last year, and it'd take him a day to dust off.
I'm totally onboard with, and grateful for, larger-scale experiments like this...thanks for putting the effort in. I wonder how well Cursor (or similar) would handle a situation in which large amounts of code are _almost_ being used. What if 3k LOC accidentally get duplicated? Can our automated systems understand that and fix it? Because if they can't, a human is going to spend a _long_ time trying to figure out what happened.
Over the next 18 months, I expect we'll hear a few stories of the LLM accidentally reimplementing an entire feature in a separate code path. It's a whole new class of bugs! :D
I think in the end AI will be a more advanced tool, but a tool nonetheless. Like methodologies, principles, good practices, etc., it only works if you use it with care, added thought, and adaptation to your case. DRY is a great principle, but sometimes it's better to repeat yourself, for one reason or another. These are the tradeoffs that the human in the loop should be making, imho.
I agree. When I read these articles on vibe coding, I can't help but think that these guys are basking in the glory of the impressive maze they built around themselves. Of course, running these things in production and having them reach the state of legacy code is an entirely different thing. Building a maze is one thing; having to run around it is an entirely different challenge.
It's like one of those world expos: everything looks fantastic, but the moment the event ends everything just crumbles.
Does this mean it uses this expensive OpenAI audio model in the app? Aren't you worried this will bankrupt you if the app goes viral and isn't monetised?
Can you share your strategy here? Is it something like topping up a $2,000 OpenAI account as a kind of marketing expense so users can try it for free? Genuine question, since I'm planning to use the OpenAI audio API in another case, and this kind of expensive pricing worries me a lot, even if switching to the new mini-transcribe and mini-tts.
https://www.recipeninja.ai/recipe/r_XbZvrH23kS6FwN/werewolf-...
Or, was this mostly just an exercise in engineering/testing AI?
A second, minor problem with your website is that the images illustrating the recipes are AI-generated and of poor quality.
You can't solve those issues by throwing more AI at them... well, maybe the second problem you can (AI images with later models are generally OK).
https://www.recipeninja.ai/recipe/r_0a8wYxMgm1zFSw/white-pow...
LLMs are super useful but currently, the primary use case is teaching, not doing. For this reason, I think ChatGPT is really just as good as an AI enabled editor (or both if you don't mind paying for two subscriptions).
Also, vibe coding has a parallel-review feature: while the code is being generated, you are also doing a live review and steering it in the right direction, so depending on your experience, the end product can be a bad mess or a wonderful piece of creation and a maintenance dream.
The issue with seasoned SWEs is that the moment a mistake (or bad pattern) is made, the baby is thrown out with the bathwater.
For a tiered app like the one presented, 35k LOC is not really that impressive if you think about it. A generic React-based front end will easily need a large number of LOC due to the modular principle of components, various hooks, and tests (which alone make up 25-40% of LOC). A business layer will also have many layers of abstraction and numerous implementations to move data between layers.
Vibe coding shines when you let it build one block at a time, limit the scope well, and focus. Also, 2-3 weeks is a lot of time to write 35k LOC. At the start of any new project, the LOC generation rate is very high, but in the maintenance phase it falls significantly as smaller changes become more common.
I'm just being honest. For my use case, I would be much better off if LLMs could just do everything.
Lots of apps are quite repetitive: for building APIs, for example, you generate one controller and then ask the app to generate more using the first one as a pattern. For the frontend you do the same for forms or lists.
Tests are often quite good, but I think they were already great even back in the first ChatGPT release.
With this strategy, and the fact that some patterns are quite verbose (albeit understandable for an AI or a reader), it is quite easy to get to a big LoC count while still maintaining consistency.
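Concretely, the kind of "pattern" controller you might hand the model as the example to copy could be as small as this (a minimal TypeScript/Express sketch; the resource and route names are made up for illustration, not taken from the app in the post):

```
import { Router, Request, Response } from "express";

// Illustrative "pattern" controller: once one of these exists, you ask the
// model to generate the analogous controllers for other resources in the
// same shape. Uses an in-memory array as a stand-in for a real data layer.
interface Recipe {
  id: number;
  name: string;
}

const recipes: Recipe[] = [];

export const recipesRouter = Router();

// List all recipes
recipesRouter.get("/recipes", (_req: Request, res: Response) => {
  res.json(recipes);
});

// Create a recipe (assumes express.json() body parsing is enabled upstream)
recipesRouter.post("/recipes", (req: Request, res: Response) => {
  const created: Recipe = { id: recipes.length + 1, name: req.body.name };
  recipes.push(created);
  res.status(201).json(created);
});
```

Once one controller like this exists, asking for "the same thing for the next resource" is exactly the kind of prompt this strategy relies on.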
> I struggle to find much utility in terms of actually writing code.
I personally feel you need to give up some control and just let the LLM do its thing if you want to use it to help you build. It honestly does a lot of things in a more verbose way and I've come to the conclusion that it is an LLM writing code for another LLM. As long as I can debug it, I'm okay with the code, as I can develop at a pace that is truly unreal.
I finished my "Recent" contexts feature in a half a day, today. Without the LLM, this would have taken me a week I think. I would say 98% of my code in the past few months has been AI generated. You can see a real life work flow here:
https://app.gitsense.com/?chat=eece40e2-6064-46d2-9bf1-d868c...
I truly believe that if you provide an LLM with the right context, it can meet your functional specs 90% of the time. Note the emphasis on functional, not necessarily style. And if *YOU* architect your code properly, it should be 100% maintainable.
I do want to make it clear that what I am doing right now is not novel, but I believe most problems are not. If the problem is not well understood, it can be a challenge, like my chat bridge feature. This feature lets you import Git repos for chatting, but I will probably need to rewrite 50% of the LLM code since the solution it built is not scalable.
For code? Autocomplete on steroids is the killer-app.
The other things the LLMs give me are prone to be over-engineered/overly verbose code or similar.
I went through a lot of "Why are you also doing $FOO then $BAR? It doesn't seem necessary if we skip them and do $BAZ, which will make one or both of those redundant" and it responding "You're right! Let's use $BAZ instead".
And giving them code to make a small change to was pointless - they would often, but not always, make an incidental change far from the point where you asked for the change.
But autocomplete? That works just great and because I've already got context of the code I am writing I can check it in (at most) two seconds and move on.
Depending on the situation this can be invaluable. If you're experienced in the domain you probably know generally what you need to do but you might get a better result by reasoning through the best solution with the constraints and requirements you have. Or maybe you missed something obvious when you write out the full context—which is a required step for getting a good output from the chatbot, and generally isn't a required step if you aren't explaining your approach to someone else and you don't want to be rigorous.
I actually do use ChatGPT for rubber-ducking, but in that context there is no (or very little) code. In a coding context, I've resigned myself to purely autocomplete-on-steroids.
The thing is, in the vibe-coding context (having the LLM write the code for you), I've had atrocious results across all of the popular LLMs.
After seeing how people like Andrej Karpathy used vibe coding to generate applications https://x.com/karpathy/status/1903671737780498883?s=61 I realized that you need to be clear on what you want the LLM to do, break down the work, and give the LLM bite-sized tasks that do one specific thing. Sometimes I had to tell it not to go and change random files because it felt the need to refactor them.
[0] https://www.recipeninja.ai/recipe/r_WHeXRD7qXHV0Vr/cyanide-i...
https://www.recipeninja.ai/recipe/r_3i5lSfWjUKq05m/crunchy-p...
https://www.recipeninja.ai/recipe/r_N1VSPtXzCJVV3l/diarrhea-...
There's also apparently a hairstyle? https://www.recipeninja.ai/recipes/hairstyle
Being able to type out code and immediately execute it directly in the window, and even have your code replaced by the output, is kind of life-changing. It fundamentally changes the way you write code: the REPL isn't just a quick way to test your code, but a direct helper to test the stuff that you write.
I did a project in Clojure recently, heavily using Conjure, and then my next project was in Rust. Rust has nice Neovim plugins as well, but it still kind of felt like a step backwards; I found myself reaching for the "automatically evaluate" keystrokes that don't exist for Rust.
There are a lot of really weird recipes still on there, including cyanide ice cream.
I've written more about it here: https://simonwillison.net/2025/Mar/19/vibe-coding/ and here: https://simonwillison.net/2025/Mar/23/semantic-diffusion/
"There’s a new kind of coding I call “vibe coding”, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It’s possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard."
Who knows what kind of time wasting is on the other side of a link these days either.. Dark pattern cookie pop-ups, subscription pop-ups, intrusive ads, account registration demands, pay walls, etc..
Contrasted to AI assisted coding, where you would give much more detailed prompts with technical specifications, and read over every line to make sure you understand it before accepting a response.
In theory, vibe coding can let someone with very limited technical expertise build complete apps, so understandably a lot of people are excited by it.
In practice, it doesn't seem like we're there yet. But each new step in AI development leads to people trying again, and it's hard to deny that the results are getting better. I think we're at the stage where AI image generators were a few years ago. Very much in the uncanny valley.
Actually, it is booming. On bsky, X, and LinkedIn, I see another recipe/todo/budget management/profit tracking/SaaS starter template/landing page/people-to-follow directory etc. being pumped out every single day. Before GenAI, this would be more like one partial feature per user every month; now, post-GenAI, it's an entire product in weeks or even hours.
I believe the indie entrepreneurs are getting the most bang for the buck from AI codegen compared to any other group.
I get this search result https://www.recipeninja.ai/search?name=Anti-inflammatory
What browser are you using? Voice mode or typing?
Search is working fine, I just tried https://www.recipeninja.ai/search?name=lasagne
Perhaps this is 1998 again, when you could earn big money by creating a visitor-counter service or a guest book service.
Perhaps now is the time for a lot of smaller projects with AI that will, in a few years, all be blasted off the market by big corporations and changes in trends.
Seriously though - vibecoding is great. Even better (or only feasible) as engineers who can dive in when we need to.
My app is iOS and I had never done any Swift. I do have AI generation but that was more of a fun afterthought. The main utility is extracting recipes from the web and having a synced shopping list that I can share with my wife.
Interview with Vibe Coder in 2025 https://youtu.be/JeNS1ZNHQs8?si=kQIVpEBUwK3pNvRw
I think this helps to understand the mindset of a vibe coder better
Also, the photos are some of the most un-appetizing, uncanny valley, shit I've ever seen.
> Eleven Labs API for text-to-speech conversion
> streamio-ffmpeg gem for audio file analysis
> Active Storage integration for audio file management
> OpenAI integration for recipe generation
etc. I'm not saying it's not a fun exercise in 'vibe coding', I'm just very curious about the quality of the code that was actually produced. It's my feeling that project maintainability is something that will be a big pain point in the future if people rely on these tools... 35k LOC for this recipe app smells like that, to me at least.
The recipe still exists though: https://www.recipeninja.ai/recipe/r_UptD1AgJYvvXWm/%D0%9B%D1...
The app literally exposes his OpenAI key.
Ah yes, they just don't make cyanide ice cream like grandma used to
But there is some quality in it, I can't argue against that
https://www.recipeninja.ai/recipe/r_SOv9sTmzAz3cg4/uranium-b...
I'm joking NSA.
It does not seem very "viral" or income-generating. I know this is premature at this point, but without charging users for the service, is it reasonable to expect to make money off of this?
I would tell the AI to avoid recipes that contain bodily fluids.
> https://www.recipeninja.ai/recipe/r_vBiLoIJK7qsUc7/cum-panca...
Always floored by the problems people think need fixing. The problem is not that you get your dirty hands on the iPad. The problem is that you want real recipes. You know, things people have actually cooked and found to be good. With real photos of how the result actually looks (instead of what an AI thinks it might look like based on the description).
You might be lucky and find these for free someplace on the web. However, those LLMs that "vibecoded" this Rails app for you are now also used to flood the web with garbage recipes, so finding good recipes on the web will become much, much harder than it already is. I browsed through the recipes and could not find a single one that actually looks real, so at the moment, you are just adding to this problem. This is why people still buy physical cooking books. The good ones are made with sturdy, thick paper so that you can get your dirty hands on them. This is what cooking is all about. Only unused cooking books stay clean.
The "problem" people have is: yeah, I want those, but for free. But creating a cooking book is a ton of work and very expensive. You need to first find good, original recipes. Then you need to actually cook each of them to see if they are any good, at least once, sometimes several times to perfect the recipe. And unless you are a photographer as a side-hustle, you need to do this again and hire a professional to make good-looking photos of the result.
Printing the books is also expensive. Good cooking books need to have very good binding so that they can be flattened to show one page for a long time without disintegrating. The paper must be thick and the printing must be able to withstand stains and moisture without becoming unreadable.
Works like a champ. Pictures and everything.
If you want something in the short-form video era, I do appreciate Andy Hearnden (andycooks), as he is concise and consistent and always posts the full recipe in the video descriptions (all too rare).
If you want just recipes that were published in the magazine (that this website is the companion to), you can also filter results to author = Good Food team
https://www.indianhealthyrecipes.com/
Edit, if you’re veg also this site is decent:
* Chef John (foodwishes)
* Brian Lagerstrom
* Adam Ragusea
* Sip N Feast
* Kenji Lopez-Alt
* Spain On A Fork
* Alton Brown (of course!)
I cook every day and love trying new things. There's no reason to pick just one, but if I had to, Chef John is my go-to. I stumbled across him when I was trying out pretty much every birria recipe on the internet, and his (yes, a white dude) is by far my favorite. That pattern seems to repeat with his recipes.
Cucchiaio.it
I believe you can subscribe to the New York Times Cooking section/app without subscribing to the rest of the New York Times.
I used to know people who hated the Times, but still paid for Cooking.
https://www.recipeninja.ai/recipe/r_LZarKW1PMNlSlx/rubbery-l...
All Ingredients
- rubber cement, 2 cups
- water, 1 cup
- lasagne noodles, 1 box
- shredded mozzarella, 2 cups
...
Step 2
Prepare the rubber sauce by mixing 2 cups of rubber cement with 1 cup of water in a saucepan over low heat until thickened.
1. Some people do want the narrative
2. It's not that much of a hardship to scroll
E.g. above all the fluff is the word "Print" in a yellow box [1], which takes you to a no-fluff page [2].
[1]: https://www.seriouseats.com/buttermilk-vanilla-waffles-recip...
[2]: https://www.seriouseats.com/buttermilk-vanilla-waffles-recip...
The cooking book scene has been openly criticized for not actually trying the recipes, even before LLMs were a thing[0]. Regular cooking websites have always been somewhat unusable due to massive ads and fluff text because 1) SEO and 2) recipes are not copyrightable, but the fluff text is.
For quite some time I've gotten my recipes directly from ChatGPT; the instructions are very condensed, they work quite well, and most importantly: it knows how to substitute ingredients. "My friend is vegan and allergic to heat-resistant soy protein" and it's going to adjust accordingly.
[0]: https://www.matchingfoodandwine.com/news/blog/recipes-that-d...
My experience with LLM recipes is like with pretty much everything else these things generate: usually very mediocre, mixed with glaring errors in between. If you are an experienced cook, you'll be able to manage since you'll recognize the errors.
Books without photos (McGee, Hewitt, Potter, Julia Child) are not interesting to publishers.
I don't buy it, that's going to be a strictly worse experience than trying out the first search result that has photos, is rated 4/5 stars or better and that has a few positive comments. You can always ask an LLM for substitution recommendations separately.
It's rare that I actually cook directly from them -- usually, that'd be big and fancy stuff or stuff I'm very unfamiliar with; in both cases I usually take the time to cross reference whatever the cookbook says with additional resources from the internet.
ChatGPT, on the other hand, I frequently use when or before cooking (and I cook virtually every day).
It's great when I only have a vague idea based on stuff in the fridge; five minutes later I've got a checklist I can reference. If it hallucinates something that I flat out don't think will work or, much more likely, comes up with something that I don't want or cannot do for lack of ingredients or time or whatever, I'll tell it to adjust the recipe and it does.
It's also great when I feed it a couple of existing recipes (from real people) to compare and contrast and integrate and reformat in a way that's most useful to me, e.g. a tabular format, or scaled to a different serving size.
With all that said, the AI based recipe sites don't really do it for me, either. If I want to cook purely AI generated recipes, a chat interface works fine -- and probably better. What I really want is an AI tool that helps me curate my own recipe collection. E.g. I want to ask it "I'd like to make Ramen, how did I do it the last time, what were my notes" and when it's done I want to tell it "ok, this was fine, I decided to double the mirin and next time I'd marinate the eggs longer" and have it update the recipe.
For context, I'm not a cooking geek or virtuoso. I enjoy it to some degree, but mostly it's just about having a nice, nutritious experience, in line with whatever my mood might be. I only ever measure things super accurately when I'm baking things in the bread maker (because it doesn't let you make corrections). For most meals, I wing half the measurements and time estimates.
In my experience, most "human recipes" are just random variations on some baseline. I hate looking for recipes on recipe sites, youtube, etc. There are food bloggers that are exceptions, but usually I'll just end up scrolling for a long time with just frustration to show for it. If I sort of know some of the ingredients I want to use, have some sense of the type of eating experience I'm going for, and I want a bunch of recommendations based on that, regular "human recipe" sites are not the answer.
90% of my new recipes come from ChatGPT, and that ratio is increasing. I just marinated some chicken based on a recipe it magicked for me. I asked for insights on mixing mayo and yoghurt in the same marinade, because I had leftovers of both. It gave me 5 or so diverse recipes, and I just picked the one that best fit my pantry and mood. I also asked it to convert the recipe from volume to weight, not to mention scale it for my specific quantity of chicken, which was super handy.
I find that ChatGPT is great at providing common-sense instructions and approximations. It's absolutely awesome at cobbling together a meal from ingredients I tell it I have. I can have an actual dialogue about any of it, get all kinds of recommendations and insights. That's been very useful to me. I'd go as far as to say that recipe generation is one of the easiest real problems for an LLM to solve. Or, at least for the kinds of recipes I use.
I've done my share of recipes from Serious Eats, but they weren't particularly good. I was doing Breton galettes the other week, which are notoriously fiddly to get right. Serious Eats had a huge article about it, with interesting insights, but their final recipe sucked, and I was trying to be accurate. Not only did I fail to get the consistency right, the wheat-buckwheat ratio was nowhere near what you'd get in France. I say, write researched articles about what makes recipes work. I can read it, I can bounce my LLM off it. If it's a fiddly recipe, I'll have to fiddle with it no matter what. If I can have a conversation with an LLM about the principles at work, that's much better to me than a bunch of "human recipes".
Also, I often have questions about alternatives or things I need advice on as I'm preparing food. I'll also look at the recipe a gazillion times to check the instructions, quantities, etc. I'll set and check a timer often too. A voice-assistant is the obvious answer to this, which I'll try at my earliest convenience.
Kudos to the author!
This just shows your lack of experience and curiosity. The art of cooking is a cultural achievement with thousands of years of history, with vast differences between different countries and cultures. Just visit a store (yes, a real one) with a good cooking book section. You might be surprised.
You should be able to do this yourself easily, though.
Edit: and right after this, I run into another AI-related gem: https://artificialintelligencemadesimple.substack.com/p/ai-t...
But yes, websites will now be filled with these low-quality recipes, and some might be outright dangerous. Cyanide custard should ring alarm bells, but using the wrong type of mushroom is equally dangerous and much more challenging to spot.
https://www.recipeninja.ai/recipe/r_iEyaSAKCQlzl4Q/vibes-and...
But then, as with most LLM tools, the fun wore off after a few minutes of playing with it.
https://www.recipeninja.ai/recipe/r_dxF7OQ0O3IGXOw/actual-co...
I bet there might be a recipe of a bomb somewhere, too.[1]
April Fools' or not, I think you could get in legal trouble, but IANAL.
[1] Apparently there is: https://www.recipeninja.ai/recipe/r_SOv9sTmzAz3cg4/uranium-b...
Edit: Both "recipes" have been deleted (the URL should tell you what it was about).
Saying that posting a simple recipe for cocaine was probably illegal sounded very "puritan american"
As a European (French) myself, I'm surprised that you feel that way, because I remember reading in a chemistry book in high school how to make explosives, and our chemistry teacher took the complete timeslot to respond in detail to someone who asked if, like Walter White, they could make meth.
No hard feelings! :)
Whatever the ultimate usefulness of the website is, the point is using it is slick. It works and it works well.
Very nice demo of vibe coding Tom. I appreciate it.
You can call me a snob, but I appreciate some things only if they are the result of work and creativity of humans.
Now updated to include real cum!
Dish: Fek Yerr AI Slop Garbage Plate
All Ingredients
- motherboard, 1, diced
- cpu, 1, diced
- olive oil, 2 tbsp
- binary code (0s and 1s), 1 cup
- soy sauce, 3 tbsp
- teriyaki sauce, 2 tbsp
- ram chips, 1 cup
- microchips, 1/4 cup
- led lights, as needed
1. When I click on a recipe from the home page, it maintains the scroll position, so I am not seeing the top of the screen. Is this deliberate?
2. "Recipe Ninja was vibecoded by Tom in San Francisco." Will that increase or decrease users' trust in your system?
3. To stop the AI from changing random files, I use "Copy relative path" to tell the AI which file to change (there is a keyboard shortcut too). Not fully vibe coding, but it can be useful for precision bug fixing.
Good luck with the project.
I can pop over to Midjourney and be determined not to draw a single line and "sit there laughing" as it draws the Mona Lisa in the style of Salvador Dali but with a turnip instead of a person.
How is this any different? What is ultimately notable about it? Did any of it make you a better programmer?
I'm always deeply impressed when people devote significant chunks of their time to achieving extraordinary results. I'm entirely baffled, however, that there's anything at all interesting about using an AI interface to build an AI interface to connect you to AI slop.
You could have spent 20 hours planting trees or doing some kind of community service, and the world would have been a far better place.
What is notable here is that someone is demonstrating that the systems are reaching a quality where this is possible.
> Did any of it make you a better programmer?
By conventional metrics, if the job got done well enough in less time, yes, even if less skill is involved.
It's similar in principle to architecture or interior design.
Posting date: 2025-04-02T01:57:13 1743559033 <-- too late
LoC: 35,000 <-- That's a _lot_
Front page: "Elon Musk Dirty Pants", "Heroin Hashbrowns", "AI Slop Stew", "Sweet Tooth Delight Made with Human Teeth" <-- WTF?
This is a joke, right?
2. Tell it to make you something
3. Get frustrated when it doesn't work
4. Think about how to revise your prompt
5. Repeat from step #2
Http://earthpilot.com/play and then join at AnthonyDavidAdams.com/zoom at 11 for show and tell.
I’m making a non-fiction book writing agent and I’d love to better understand how you used function calling to navigate the website!
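Roughly: the functions get declared as tools in the OpenAI request first. A minimal sketch of what that declaration might look like (Chat Completions-style function-calling schema; the parameter names mirror the handler below, but the descriptions and exact shape here are illustrative, not the production config):

```
// Illustrative tool declarations -- parameter names match the handler below,
// but the descriptions and schema details are assumptions.
const tools = [
  {
    type: "function",
    function: {
      name: "search_recipes",
      description: "Navigate to the recipe search page with the given filters",
      parameters: {
        type: "object",
        properties: {
          name: { type: "string", description: "Free-text recipe name" },
          difficulty: { type: "string", enum: ["easy", "medium", "hard"] },
          min_duration: { type: "number", description: "Minimum total time in minutes" },
          max_duration: { type: "number", description: "Maximum total time in minutes" },
          tag: { type: "string" },
          ingredients: { type: "array", items: { type: "string" } }
        }
      }
    }
  },
  {
    type: "function",
    function: {
      name: "start_cooking",
      description: "Start step-by-step cooking mode for the currently open recipe",
      parameters: { type: "object", properties: {} }
    }
  }
];
```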
Then you handle those function calls in your JavaScript.
```
if (function_name === 'search_recipes') {
  const searchParams = new URLSearchParams();

  if (args.name) searchParams.set('name', args.name);
  if (args.difficulty) searchParams.set('difficulty', formatDifficulty(args.difficulty));
  if (args.min_duration) searchParams.set('minDuration', args.min_duration.toString());
  if (args.max_duration) searchParams.set('maxDuration', args.max_duration.toString());
  if (args.tag) searchParams.set('tag', args.tag);

  // Handle ingredients array correctly - the search page expects ingredients[]
  if (args.ingredients && args.ingredients.length > 0) {
    // Clear any existing ingredients
    searchParams.delete('ingredients[]');
    // Add each ingredient individually with the correct array notation
    args.ingredients.forEach((ingredient: string) => {
      searchParams.append('ingredients[]', ingredient);
    });
  }

  const queryString = searchParams.toString();
  const url = queryString ? `/search?${queryString}` : '/search';
  navigate(url);
  return;
}

// start_cooking function
if (function_name === 'start_cooking') {
  // First check if we have an onStartCooking callback registered
  if (callbacksRef.current.onStartCooking) {
    callbacksRef.current.onStartCooking();
    return;
  }
}
```

https://news.ycombinator.com/newsguidelines.html
https://news.ycombinator.com/showhn.html
Please don't attack others or their work like this on this site, regardless of who or what you have a problem with. It's the opposite of the curious, respectful conversation we're looking for, and always has been.
Sane advice: learn to program, put the AI hype/drug aside, and do yourself a favor. Knowing how to program from scratch is an invaluable lifetime skill, and perhaps unassisted coding will be a sought-after skill in the years to come.
The two biggest issues I can think of with this approach is performance and security, with the first one only being a problem if you "make it", and the second one being too often ignored with or without vibe coding.
- can't be maintained over time, gonna need to replace it
- mostly simple/basic furniture
- out of place in any existing environment
- easy to break (cardboard tables!)
Sometimes that's all you need though.
(2) This particular post is an interesting data point in the research of what the current crop of LLM-based tools is capable of. It reads a bit like a Windsurf ad; I would like more details on how the technical side of the development panned out, what were the problems and where, how were they overcome, etc.
(3) The parent comment reads as a somewhat funny mix of a socialist "anti-greed" agenda and frowning upon the fact of sharing knowledge and experience freely.
Less tongue-in-cheek: there's no word censorship in HN. You can say "kill myself" here.