Hacker News | legel's comments

I've been using Google Workspace for over a decade. Historically, after removing a user from your organization, it was always easy to migrate their data (e.g. Google Drive folders) to your own user. But recently I had a nightmare experience where I needed to remove users no longer at my company, yet was unable to save their very important data. Google intentionally removed the feature in order to promote their new "Archive" user feature, where "data is safe" and you pay for it at continued unnecessary, extortionist rates.

Separately, I just had another terrible interaction with my own data through Google Labs Flow. That is the site serving the latest Nano Banana image and Veo video generations. My takeaways on the quality, value, and issues with those "world models" are for another time. Here I'm pointing out another unique "dark pattern" that product managers seem compelled to apply: "if you want us to save your data in a database, then you have to let us view that data and train models on it". It's ridiculous: either I can have my data automatically deleted and have sessions be completely "dumb", or I can submit my soul for eternity and "allow any reviewer to analyze".

Beware, founders and developers relying on Google. Don't be surprised if you wake up with your data held hostage. Don't be surprised when you realize your intellectual property can either be deleted or stolen, but not simply saved.

Thanks Google.


One could use any number of LLMs on real-world problems.

Why are we still interviewing like it's 1999?


Old habits die hard. And engineers are pretty lazy when it comes to interviews, so throwing the same leetcode problem into CoderPad every time makes things easier for the person doing the interviewing.


If you want people to interview better, you have to both allocate resources to it, and make it count on perf. It’s not laziness, it’s ROI.


As an interviewer, I ask the same problems because it makes it much easier to compare candidates.


How do you know whether one candidate happened to see the problem on leetcode and memorized the solution, versus another who struggled but figured it out more slowly?


It's very easy to tell, but it doesn't make much difference. The best candidates have seen the problems before and don't even try to hide it; they just propose their solution right away.

I try to give positive feedback for candidates who didn't know the problem but could make good use of hints, or had the right approach. But unfortunately, it's difficult to pass a leetcode interview if you haven't already seen a problem similar to the one being asked. Most candidates I interview nowadays seem to know all the questions.

That's what the company has decided, so we have to go along with it. The positive side is that if you do your part, you have good chances of being hired, even if you disagree with the process.


It doesn’t matter. It’s about looking for candidates who have put in the time for your stupid hazing ritual. It selects for people who are willing to dedicate a lot of time to meaningless endeavors for the sake of employment.

This type of individual is more likely to follow orders and work hard - and most importantly - be like the other employees you hired.


Once upon a time, the "stupid hazing ritual" made sense.

Now it means the company is stupid.


Because if you want to hire engineers, you have to ask engineering questions. Claude and GPT and Gemini are super helpful, but they're not autonomous coders yet, so you still need an actual engineer to vet their output.


I happen to have a background at this interface as well, as the founder of DeepEarth and Ecodash.ai. I can tell you that I would greatly value such experience in a collaborator, although I am not currently hiring. While having such a specific interdisciplinary niche can feel limiting, I also see it as a potential superpower for excelling in a very important domain. I'll add that machine learning and other modeling techniques are a great bridge between the classical natural sciences and modern tech, and something I would look for in collaborators. More specifically from the earth sciences, "GeoAI" would be a key focus.


Just shared this myself. Very lovely!


Thanks for reporting these metrics and drawing the conclusion of an underlying breakthrough in search.

In his Nobel Prize lecture, Demis Hassabis ends by discussing how he sees all of intelligence as a big tree-like search process.

https://youtube.com/watch?v=YtPaZsasmNA&t=1218


The one thing I got out of the MIT OpenCourseWare AI course by Patrick Winston was that all of AI could be framed as a problem of search. Interesting to see Demis echo that here.


All of this read and written on a smartphone.

Reversion to the past is not preparation for the future.


Very cool!

The training objective is clever.

The 50+ filters at Ecodash.ai for 90,000 plants came from a custom RAG model on top of 800,000 raw web pages. Because LLMs are expensive, chunking and semantic search for deciding what to feed into the LLM at inference time is a key part of the pipeline that nobody talks about. I think what I did was: run all text through the cheapest OpenAI embeddings API. Then, I recall that nearest-neighbor vector search alone wasn't enough to catch all the information relevant to a given query. So I remember generating a large number of diverse queries that mean the same thing (e.g. “plant prefers full sun”, “plant thrives in direct sunlight”, “… requires at least 6 hours of light per day”, …), doing nearest-neighbor vector search on all of them, and using the aggregate statistics to choose what to semantically feed into the RAG context.
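The multi-query retrieval step described above can be sketched roughly as follows. This is my own minimal reconstruction, not the actual Ecodash.ai code: `embed` here is a deterministic hash-seeded stand-in for a real embeddings API, and the vote-pooling heuristic (count how many query variants retrieve each chunk) is one plausible reading of "using the statistics":

```python
import hashlib
import math
import random

def embed(text, dim=64):
    # Stand-in for a real embeddings API (the comment mentions OpenAI's
    # cheapest embeddings endpoint). A deterministic hash-seeded unit
    # vector, just so this sketch runs offline.
    seed = int(hashlib.md5(text.encode()).hexdigest()[:8], 16)
    rng = random.Random(seed)
    v = [rng.gauss(0, 1) for _ in range(dim)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def retrieve_chunks(chunks, query_variants, top_k=3):
    """One nearest-neighbor search per paraphrased query, then pool the
    rankings: a chunk retrieved by many variants is more likely relevant
    than one retrieved by only a single phrasing."""
    chunk_vecs = [embed(c) for c in chunks]
    votes = [0] * len(chunks)
    for q in query_variants:
        qv = embed(q)
        # Cosine similarity (vectors are unit-normalized).
        sims = [sum(a * b for a, b in zip(cv, qv)) for cv in chunk_vecs]
        ranked = sorted(range(len(chunks)), key=lambda i: sims[i], reverse=True)
        for i in ranked[:top_k]:
            votes[i] += 1  # tally cross-variant hits
    order = sorted(range(len(chunks)), key=lambda i: votes[i], reverse=True)
    return [chunks[i] for i in order if votes[i] > 0]
```

The pooled chunks would then be concatenated into the LLM prompt for the actual filter extraction.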


Hey, thanks for unpacking what you did at ecodash.ai.

Did you manually curate the queries that you did LLM query expansion on (generating a large number of diverse queries), or did you simply use the query log?


Have you tried the bm25 + vector search + reranking pipeline for this?
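For readers unfamiliar with that pipeline: BM25 handles exact keyword matches that embeddings can miss, and the two result lists are then merged before a reranker sees them. A minimal stdlib sketch of the BM25 stage plus reciprocal-rank fusion (parameter choices `k1=1.5`, `b=0.75`, `k=60` are common defaults, not anything from this thread):

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Plain Okapi BM25 over whitespace tokens: the keyword half of a
    hybrid bm25 + vector-search retrieval pipeline."""
    toks = [d.lower().split() for d in docs]
    N = len(docs)
    avgdl = sum(len(t) for t in toks) / N
    df = Counter(term for t in toks for term in set(t))  # document frequency
    scores = []
    for t in toks:
        tf = Counter(t)
        s = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log((N - df[term] + 0.5) / (df[term] + 0.5) + 1)
            norm = tf[term] + k1 * (1 - b + b * len(t) / avgdl)
            s += idf * tf[term] * (k1 + 1) / norm
        scores.append(s)
    return scores

def rrf_fuse(rankings, k=60):
    """Reciprocal-rank fusion: merge the BM25 ranking with a vector-search
    ranking without having to calibrate their score scales; the fused list
    is what you would hand to a cross-encoder reranker."""
    fused = Counter()
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking):
            fused[doc_id] += 1.0 / (k + rank + 1)
    return [doc_id for doc_id, _ in fused.most_common()]
```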


I’m glad Ilya starts the talk with a photo of Quoc Le, who was the lead author of a 2012 paper on scaling neural nets that inspired me to go into deep learning at the time.

His comments are relatively humble and based on public prior work, but it’s clear he’s working on big things today and also has a big imagination.

I’ll also just say that at this point “the cat is out of the bag”, and probably it will be a new generation of leaders — let us all hope they are as humanitarian — who drive the future of AI.


Let us all hope that they will be as humanitarian as they can be, but let's not forget they are still just human beings.


Literally zero chance that the new generation of leaders of artificial intelligence will be humanitarian.


Math comes from brains.


Obviously the article is challenging the view — scientific or not — that mitochondria are not living.

Side note: previously I was funded by NSF and NASA to study such questions from biophysics and astrobiology.

That said, this was a delightful read. I had never conceived of mitochondria as independent living networks, like the bacteria in our bodies, with their own genomes, evolution, and flows of information and energy.

Reading about the health benefits of “external mitochondria” made me think about when I hug my dog: are we exchanging mitochondria, perhaps?

