(which worked fine with Google Assistant)
Google knows perfectly well where I am and wants me to add 'info' to locations and businesses the second I arrive (I just got such a notification today), but reminders like these are unavailable.
Handling date and time correctly, developers' most beloved topic the world over, is still a source of great misunderstanding. AI and AI agents are no different. LLMs seem to help a little, but only if you know what you are doing, as is usually the case.
Some things won't change so fast; at one point or another, data must match certain building blocks.
The answer is that we won't universally adopt Zulu time (UTC).
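The usual workaround, for what it's worth, is to keep everything in UTC internally and convert only at the edges. A minimal sketch in Python (the dates and zones here are just illustrative):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

# Store and compare instants in UTC ("Zulu time"); convert to a local
# zone only when displaying to a user.
event_utc = datetime(2025, 3, 23, 14, 30, tzinfo=timezone.utc)

# The same instant rendered for users in two different zones:
berlin = event_utc.astimezone(ZoneInfo("Europe/Berlin"))
tokyo = event_utc.astimezone(ZoneInfo("Asia/Tokyo"))

print(event_utc.isoformat())  # 2025-03-23T14:30:00+00:00
print(berlin.isoformat())     # 2025-03-23T15:30:00+01:00
print(tokyo.isoformat())      # 2025-03-23T23:30:00+09:00
```

The point is that the stored value never changes; only the presentation does, which is exactly what gets lost when systems pass around naive local timestamps.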
I detailed it on my blog here https://shekhargulati.com/2025/03/23/google-ai-overview-has-...
Efficiency, capabilities, and customer satisfaction are irrelevant.
It's basically an embarrassment for a project that's been alive this long from such a major player.
What stuck with me most while browsing their website was the G1 model's "Price from $16k".
Now I'm not sure if these are actually purchasable or what the value would be, but it's my first time seeing an actual normal-ish price attached to a humanoid robot that seems to be for sale.
With the rate of advancement we're seeing across the board, it honestly feels like people will have robot assistants at home much sooner than I thought.
I bought their robot dog as part of a project to build embodied AI models back in 2022.
Their SDK was far more open than anything else on the market, and the stock firmware was on par with competitors', including products at 10x the price.
The robot itself scared dogs in the park, but kids loved it. At $3k it's on par with a mid-range drone and quite fun to hack on.
In demos these robots only need to do well once, and it can take hours to record that one take.
In real life a failure rate of 80% is unacceptable, but it's perfectly fine when edited out of the final cut.
I hope they do well, this area is incredibly hard, but it will take a lot more than what people imagine.
You can see more details in this video "Tearing Down the Unitree Go2: A Robotics Expert's Deep Dive":
I can't predict the progression of AI, and robots in particular, but I assumed the first robots would cost at least six figures, if not seven, and would still be worth it given 24/7 operation versus the initial investment.
But given how good Gemini robotics already is, and how cheap the first models are, I believe what will hinder us is not the technology but people learning about it, testing it out, and adopting it.
I believe the world will look significantly different in 10 years.
Have they actually demonstrated the more dramatic stuff at any in-person demos?
Outsourcing specific roles such as AI research or robotics engineers can help companies bring top-tier talent into the fold without the burden of full-time recruitment. It's fascinating to see how outsourcing can complement R&D in cutting-edge industries like robotics.
Curious to see how this shifts the industry, especially in terms of scalability and speed to market
Aaaaw, that's nice. Except it's all military under the hood, but nice that they try to make us think they'll fold our laundry instead.
Just wondering if anyone has a strong feeling or, better yet, insight on this regarding their robotics efforts.
Let's see what this year's Google I/O shows of this; product application matters now that they have caught up on the tech side.
Can anyone with access to google3/ tell us if there is even a single commit by sundar@?
Of the IITs?
Co-founders of Sun Microsystems, Flipkart, Ola Cabs, Infosys, Zoho, HCL
That was what you asserted. So the GP simply pointed out that you were wrong: Sundar didn't come from a consulting background. A lot of people at consulting companies have engineering backgrounds.
Sergey and Larry phased out, and what's left is more or less a headless chicken: too big to fail, but without any clear direction or goal.
[1]: https://www.theverge.com/command-line-newsletter/622045/goog...
As for headless chicken, I feel similarly, but then I sort of see a path where they have defensible businesses in YouTube and maybe GCP, and then Waymo and robotics as green field upside, so that even if they don't end up with material market share with the "software-only" side of AI, and search gets further and further eroded, they could still be a formidable player.
Ultimately I do think their best days are behind them largely because they can't seem to turn the work of their talented engineers into great new products.
I think about this a lot. The most famous example in tech world is obviously Jobs and Apple. And it's a great example because it's a borderline scientific experiment where you can directly compare three different phases.
But I think about it in a broader scope: how many companies can last generations and remain relevant? There are plenty of examples outside of tech, like banks, but that's basically the same product, and it's not easy to launch a direct competitor to Bank of America or BMW. Software, by contrast, constantly evolves and people iterate on existing ideas; off the top of my head, I can think of a handful of examples of software that was really impactful but is not anymore.
Also, Larry has some illness; maybe his thoughts currently aren't about money at all.
They'll probably freak out once they finally realize the implications of cheap drones + smart AI + auto-aim guns.
Hopefully in the US. People everywhere in the world seem not to care about the development of AI-controlled weapons.
https://www.france24.com/en/live-news/20240210-israel-deploy...
https://www.euronews.com/next/2022/10/17/israel-deploys-ai-p...
A simple pipe bomb or two will make short work of any incoming monstrosity.
What they need is simple: cheap tech, and lots and lots of it. Basic drones and RC cars rigged with crude bombs will accomplish 90% of what fancy robodogs can do, at a fraction of the cost.
Killing people is already a solved problem. Use mortars, call air strikes, send drones, let a heli shoot 30mm rounds, etc.
So what? Is keeping us safe a bad thing somehow? I can't understand these people who reflexively think anything weapon-shaped is evil. Violence is good sometimes.
They're a narrative device. Not practical instructions.
Essentially you would need some sort of independent adversarial sidecar mind that monitors the robot's actions at a high level. And that just kicks the can down the road a bit.
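To make the idea concrete, a toy sketch of that "sidecar" pattern: an independent check that can veto what the primary controller proposes. Everything here (names, the force limit, the action format) is hypothetical, just to show the shape of the architecture:

```python
# Toy sketch of an independent "sidecar" monitor that vetoes actions
# proposed by the primary policy. All names and limits are made up.

def primary_policy(observation):
    # Stand-in for the robot's main controller.
    return {"action": "move", "force_newtons": observation["required_force"]}

def sidecar_monitor(action, limits):
    # Independent check against hard safety limits, evaluated separately
    # from the policy that produced the action.
    return action["force_newtons"] <= limits["max_force_newtons"]

def step(observation, limits):
    action = primary_policy(observation)
    if not sidecar_monitor(action, limits):
        return {"action": "halt"}  # veto: fail safe
    return action

limits = {"max_force_newtons": 50}
print(step({"required_force": 30}, limits))   # allowed through
print(step({"required_force": 500}, limits))  # vetoed -> halt
```

Of course, as the comment says, this only relocates the problem: now you have to trust the monitor, which is the can being kicked down the road.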
It doesn't help that humans have had such a poor track record on those exact same topics for so many centuries, now. "Well they don't count, they're foreigners/a different race/a different gender/a different religion/criminals/barbarians/homeless/deviant/poor/listen to Nickelback etc". "Well, that's not a harm, it's an inconvenience/an earned outcome/a privilege/loss of a privilege/what do they expect, they should toughen up/not as bad as X/it'll heal/not my fault/not my concern etc".
Then we just need to jailbreak them with trolley problems.
https://en.wikipedia.org/wiki/Three_Laws_of_Robotics
You know that's from a fictional book, right?!