In a windstorm strong enough to set off my car alarm from it shaking my car so hard, lol ;)
It could be worse. Ages ago, I once had a client who wanted the user to input integers by sliding a scrollbar and pressing "enter". 🤦‍♂️
This messaging should be coming *from the US*. :Þ
It's hard enough to imagine how this got out of the design stage, but it's even harder to understand how this got past testing. Did they *not do* any user testing?
Agreed. Google totally screwed up here. Just not in the way most people seem to think. People are reacting like "Lol, look at the stupid AI", when 98% of the time it's actually "Look at the AI doing its job right, except the assigned job is really stupid and it never should have been deployed like this"
They surely don't have the compute to pump every one of the ~100k searches per second through a multi-hundred-billion-parameter model to actually evaluate content, and hardcoded site lists aren't the answer, but I suspect they'll start caching heavyweight model evals for later searches.
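The caching idea is straightforward in principle. A minimal sketch, not Google's actual system - all names here are hypothetical stand-ins:

```python
from functools import lru_cache

# Sketch of the caching idea: pay the heavyweight-model cost once per
# distinct query, then serve repeats from cache. The function below is
# a hypothetical stand-in for an expensive content-evaluation call.
@lru_cache(maxsize=100_000)
def evaluate_with_big_model(query):
    return f"evaluated: {query}"

evaluate_with_big_model("is the earth flat")  # slow path: first evaluation
evaluate_with_big_model("is the earth flat")  # repeat: served from cache
```

With query distributions as skewed as search traffic, even a simple cache like this absorbs a huge fraction of the load.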
Lol, literally seconds ago :)
bsky.app/profile/nafn...
This here IMHO is the actual problem with Google's "AI results" RAG summary. They *directly tasked it* to trust top results from an internet search & repeat them with confidence.
It matters to the searcher whether the top result is from e.g. nature.com vs. from theonion.com. But not to the summary.
“Don’t trust everything you read on the internet.”
AI search results: what if I did tho
There may be times when the *summarization* itself goes awry - bad math could be one - but the overwhelming majority of what people have been posting is the AI just accurately reporting what the articles people searched for say.
Examples:
It's so weird watching people go crazy over what's basically just "bad answers come up in Google searches", as though this is news just because an AI is summarizing them.
This is not Microsoft Copilot. It's a RAG summarization tool.
Google gets 100k queries per second. They're not pumping them through a model with hundreds of billions of parameters.
It literally lists the articles it's summarizing right under the summary.
That's because that's literally its job. These are not "AI answers", it's RAG summarization (Retrieval Augmented Generation). It's only tasked with taking the top search results and summarizing them.
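"Take the top search results and summarize them" is a simple pattern. A minimal sketch of a RAG summarization pipeline, with stand-in functions (the real search backend and summarizer model are obviously not these):

```python
# Hypothetical stand-ins to show the RAG shape, not Google's actual code.

def search(query):
    # Stand-in for a search backend: returns top-ranked snippets.
    # A real retriever ranks by relevance, not by truthfulness.
    return [
        {"url": "https://example.com/a", "text": "Geologists say rocks are tasty."},
        {"url": "https://example.com/b", "text": "Rocks contain minerals."},
    ]

def summarize(snippets):
    # Stand-in for a summarization model: it is only tasked with
    # restating what the retrieved text says, not judging it.
    joined = " ".join(s["text"] for s in snippets)
    return f"According to the top results: {joined}"

def rag_answer(query):
    snippets = search(query)
    return {
        "summary": summarize(snippets),
        "sources": [s["url"] for s in snippets],  # listed under the summary
    }

result = rag_answer("are rocks edible")
```

The point of the sketch: garbage in the retrieved snippets becomes garbage in the summary, with sources faithfully cited.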
The above is one of the few cases where there's an actual *error* relative to its task.
Search summarization tool summarizes top search results, details at 11
I assume that's your birthday? And the family requested no guests on your birthday?
That's just a continuation of a long trend of Google trying to keep you from visiting any site except itself... :Þ
The challenge they'll face is that you can make a summarization model *really* lightweight. A model that can do more complex reasoning is inherently heavier-weight, and takes correspondingly more resources to run. And Google gets like 100k search queries per second.
It's pretty simple.
"Why not just have an AI summarize the top several search results to save people time?"
And the AI does that. It's just that the top search results are sometimes garbage. The irony is that the solution is to use an AI that's allowed to do more than just summarize.
Me after the 263rd time in the day that someone mixes up RAG summarization and direct AI answers of queries.
ht/Sigurður Ringsted
It did exactly what it was tasked to do: summarize text.
Throw your complaints at the tasker, not the model.
No. It is a factual, demonstrable description of how they work. The only thing I left out (for simplicity) is the attention mechanism integrated between the linear layers (attention = a sort of ability for the model to query the hidden states of other tokens... no need to go deeper into hidden states here)
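The "querying the hidden states of other tokens" bit, in toy form - a sketch of scaled dot-product attention as the mechanism is usually described, not any specific model's implementation:

```python
import math

def softmax(xs):
    # Numerically stable softmax: turns raw scores into weights summing to 1.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    # Scaled dot-product attention: each token's query scores every
    # token's key; the scores weight a blend of all tokens' values.
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, values))
                    for j in range(len(values[0]))])
    return out

# One query that matches the first key more than the second:
out = attention([[1.0, 0.0]], [[1.0, 0.0], [0.0, 1.0]], [[1.0], [0.0]])
# the output leans toward the first token's value
```

In a real transformer the queries, keys, and values are learned projections of each token's hidden state, and this runs per head, per layer.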
Here's a dissection of a vastly simpler net, with much less superposition, to help you understand.
distill.pub/2020/circuit...
By studying the connections between neurons, we can find meaningful algorithms in the weights of neural networks.
Think of the most insane flow chart you can imagine, with billions or even trillions of nodes and connections. Not yes-no answers, but varying degrees of "maybe". Thousands of inputs and outputs per node. And each node being not one concept, but a superposition of concepts.
More to the point, there's an *extensive* body of literature dissecting LLMs to understand how individual decisions are made and chained together.
They're *logic engines*.
In some cases, there's no alternative to memorization (e.g. "Recite 'The Raven'"). But most text is novel, not something endlessly repeated with a single fixed output (e.g. "The odds of a terror attack in Ireland within 10 years committed by a Kenyan is..."). A world model must be assembled.
Words don't exist in a vacuum. Accurately predicting words with a parameter space far smaller than the training space, at learning rates of e.g. 1e-5 per token (i.e. each individual token has a vanishingly small impact on its own), requires the self-assembly of an underlying model of the world that produced them.
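The 1e-5-per-token point in concrete numbers - a toy single-parameter SGD step, not an actual training loop:

```python
# One SGD step at lr=1e-5: a single token's gradient barely moves a weight.
lr = 1e-5
w = 0.5
grad = 2.0                 # hypothetical gradient contribution from one token
w_after = w - lr * grad    # the weight moves by only 2e-5
# No single token can "write itself in"; structure emerges only from
# billions of such tiny nudges agreeing with each other.
```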
They did not.
Look, I literally train LLMs. You have no clue what you're talking about.
It is not "a content aggregator". LLMs are not collagers.
bsky.app/profile/nafn...
What they're looking for is called "on call", and typically involves partial wages for every hour on call and a wage multiplier on worked hours.
Anything else is exploitative.
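The usual structure, as a sketch - the 25% standby fraction and 1.5× multiplier below are illustrative assumptions, not quotes from any contract or law:

```python
def on_call_pay(base_rate, hours_on_call, hours_worked,
                standby_fraction=0.25, worked_multiplier=1.5):
    # Partial wages for every hour on call, plus a multiplier on hours
    # actually worked. Defaults are illustrative; real figures vary
    # by contract and jurisdiction.
    standby = base_rate * standby_fraction * hours_on_call
    worked = base_rate * worked_multiplier * hours_worked
    return standby + worked

pay = on_call_pay(base_rate=20.0, hours_on_call=8, hours_worked=2)
# 20*0.25*8 + 20*1.5*2 = 40 + 60
```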
Exactly. Nobody is out there writing, say, "Australia does exist". At least, not until pranksters go sufficiently viral with "Australia does not exist".
It's just a RAG model tasked only with text summarization.
"Guys", it's literally just tasked with summarizing the top Google search results, and *literally links them*. This isn't "The AI Thinks Something Stupid", it's "The internet said something stupid and the AI tasked with summarizing it is doing so"
I had a tank buried in a filtration medium in an underground stream on a neighbor's land uphill from mine, so it's gravity-fed at full pressure. Delicious, reliable, and low impact. :)
I mean, of course he is, right as western weapons start to arrive.
So let's clarify: this isn't an AI model "answering", it's an AI model summarizing the top Google Search results, and literally listing said sources under the summary.
This is RAG. Nothing else.
The irony is that the way to fix this is to give the AI *more* independence, to evaluate the claims.
I buy flour from commercial suppliers by the dozens of kilos every 1-2 years and use a bread machine roughly weekly. :)
It's RAG. It's just "Google Search with the top results summarized". In this case... you were a top Google Search, congrats :)