Recent comments in /f/singularity
IluvBsissa OP t1_jacxdtx wrote
Reply to comment by TFenrir in Context-window of how many token necessary for LLM to build a new Google Chrome from scratch ? by IluvBsissa
Oooh, I totally forgot about the "Top-Secret" Pitchfork project! I really hope it gets somewhere.
No_Ninja3309_NoNoYes t1_jacwzfq wrote
Reply to Context-window of how many token necessary for LLM to build a new Google Chrome from scratch ? by IluvBsissa
Purportedly Twitter has 20M LoC of Scala. Scala is a JVM language that is somewhat more concise than Java. IDK how much of that is unit tests, documentation, and acceptance tests. Anyway, style, programming language, and culture matter. Some coders can be verbose; others just want to get the job done. You can write unreadable code in any language. This is fine for small projects because you can figure out what is going on through trial and error. For Twitter it will not work. The bigger the team, the clearer and more defensively you have to code. Defensive code is verbose, since you are checking for preconditions that might rarely occur. Some languages are more verbose than others.
But anyway, no one codes bottom-up. You usually start with a global design and iterate multiple times, using mock-ups if something is still vague. I don't think your question has an answer right now. Someone has to try it and see what the issues are.
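A toy Python sketch of that verbosity gap (the function and its checks are invented purely for illustration): the terse and defensive versions compute the same thing, but the defensive one spends most of its lines on precondition checks.

```python
def withdraw(balance: float, amount: float) -> float:
    # Terse version: trusts every caller to pass valid input.
    return balance - amount

def withdraw_defensive(balance: float, amount: float) -> float:
    # Defensive version: same one-line logic, but several extra lines
    # spent checking preconditions that should rarely be violated.
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount
```

Multiply that ratio across a large team's codebase and the line count balloons fast.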
EastJournalist88w65 t1_jacwjml wrote
Reply to comment by DungeonsAndDradis in Building my own proto-AGI: Update on my progress by Lesterpaintstheworld
Or build the right one based on his progress
OutOfBananaException t1_jacw2ry wrote
Reply to comment by drsimonz in Leaked: $466B conglomerate Tencent has a team building a ChatGPT rival platform by zalivom1s
Being aligned to humans may help, but a human-aligned AGI is hardly 'safe'. We can't imagine what it means to be aligned, given we can't reach mutual consensus among ourselves. If we can't define the problem, how can we hope to engineer a solution for it? Solutions driven by early AGI may be our best hope for favorable outcomes from later, more advanced AGI.
If you gave a toddler the power to 'align' all adults to its desires, plus the authority to overrule any decision, would you expect a favorable outcome?
ShidaPenns t1_jacvt76 wrote
Reply to comment by dakinekine in AI powered brain implants smash thought-to-text speed record by jrstelle
I can now.
EastJournalist88w65 t1_jacvq5a wrote
Reply to What do you expect the most out of AGI? by Envoy34
A med-bay like in Elysium and a replicator like in Star Trek
TFenrir t1_jacuy0q wrote
Reply to Context-window of how many token necessary for LLM to build a new Google Chrome from scratch ? by IluvBsissa
I think this is really hard to predict, because there are many different paths forward. What if LLMs get good at writing minified code directly? What if they make their own programming language? What happens with new architectures that have something like RETRO or similar memory stores built in? Heck, even current vector stores allow for some really impressive things. There are tons of architectures that could potentially come into play that make a maximum context window of 32k tokens more than enough, or maybe 100k is needed. There was a paper I read a while back that was experimenting with context windows that large.
Also, you should look into Google Pitchfork, the code name for a project Google is working on that is essentially an LLM tied to a codebase, able to iteratively improve it through natural language requests.
My gut is that by this summer we will start to see very interesting small apps built with unique architectures in which LLMs iteratively improve a codebase. I don't know where it will go from there.
TurbulentApricot6994 t1_jacuojy wrote
Reply to comment by ThePerson654321 in Bio-computronium computer learns to play pong in 5 minutes by [deleted]
Yeah, I'm trying to understand why
EDIT: I wonder whether people think that someone asking these questions is against this technology, which I'm not; I'm just genuinely asking two questions about it, jeez
Can you explain why you are mad about it?
vivehelpme t1_jacuivl wrote
Reply to comment by Sandbar101 in People lack imagination and it’s really bothering me by thecoffeejesus
>40 years till the end of our human society as we know it. Whatever comes next will be so radically different it will be unrecognizable.
400 years ago, one could sit at a wooden outdoor table with a glass of wine, wearing woven textile clothes, and enjoy the warmth of a sunny spring day.
In 40 years I can still do that. Some things change, others don't. I don't need a pair of carbon-nanotube smartpants with RGB LEDs that can give me a handjob, thank you very much.
Lawjarp2 t1_jacugjc wrote
Reply to Context-window of how many token necessary for LLM to build a new Google Chrome from scratch ? by IluvBsissa
Context won't even matter. No single person wrote all those millions of lines of code, and no single person needs to know all of it. Just the functionality of each module and how to use it is enough context for others to build their own modules on top.
Essentially a 32k or even an 8k context would be enough by itself. But ChatGPT as it is now is not robust.
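A toy illustration of that idea (module names and summaries are invented): instead of putting the full source in the prompt, only each module's one-line interface summary goes in, which keeps even a small context window sufficient.

```python
# Invented module registry: name -> one-line interface summary.
MODULES = {
    "net": "net.fetch(url) -> bytes: download a resource over HTTP",
    "html": "html.parse(data) -> Tree: build a DOM tree from bytes",
    "render": "render.paint(tree) -> Frame: rasterize a DOM tree",
}

def build_context(task: str, modules: dict) -> str:
    """Prompt context for one module's author: the task plus interface
    summaries of the other modules, never their implementations."""
    summaries = "\n".join(modules.values())
    return f"TASK: {task}\nAVAILABLE MODULES:\n{summaries}"

prompt = build_context("implement a page loader", MODULES)
```

The prompt stays a few hundred tokens no matter how many millions of lines sit behind each summary.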
dasnihil t1_jacu838 wrote
Reply to Context-window of how many token necessary for LLM to build a new Google Chrome from scratch ? by IluvBsissa
Comparing applications by "lines of code" is okay for laymen to do, but software engineers know the challenge at hand in letting an AI model build a Chrome-like codebase (https://github.com/chromium/chromium).
LLMs are good now; they can do minuscule things in a smaller context. What we need now is a bigger thinking machine that gets the big picture and makes use of LLMs and other predictive networks to get things done, staying focused on the big picture and fixing bugs along the way. "Bugs" are not just errors that a superintelligent AI would never make, but also adjustments and adaptations to technological improvements and improved algorithms.
But we can totally do the lines-of-code vs. tokens -> LLM thing; it's a fun mental exercise, but pointless.
vivehelpme t1_jactr0m wrote
Reply to comment by thecoffeejesus in People lack imagination and it’s really bothering me by thecoffeejesus
Prompt-to-3D exists; the rest is just an implementation question of chopping up the original text into good prompt snippets and getting the "style" of the output polished so it appears consistent and conveys the story.
There's no innovation needed for it, just someone with the know-how wanting to explore that particular creative arena, with access to enough cloud GPUs.
challengethegods t1_jact6ez wrote
Reply to Context-window of how many token necessary for LLM to build a new Google Chrome from scratch ? by IluvBsissa
Well, the context window is not as limiting as people seem to think. It's basically the range of text the model can handle in a single instant. For example, if someone asked you a trick question and the predictable false answer popped into your head immediately, that's what a single call to an LLM is. Once people figure out how to recursively call the LLM inside of a larger system that keeps track of long-term memory/goals/tools/modalities/etc., it will suddenly be a lot smarter, and that kind of system can have even GPT-3 write entire books.
The problem is that the overarching system also has to be AI, sophisticated enough to complement the LLM, in order to reach a range where the recursive calls are coherent. The context window is eaten very quickly by reminding the model of relevant things, to the point where writing one more sentence/line might take the entire context window just to hold all the relevant information, or even an additional pass afterwards to check the extra line against another entire block of text. Which basically summarizes to: an 8k context window is not 2x as good as a 4k context window, it's much better, because all of the reminders are a flat subtraction.
Real-world layman example: suppose you have $3900/month in costs and $4000/month in revenue. That leaves $100/month you can spend on "something". Now increase the revenue to $8000/month: suddenly you have $4100/month, 41x as much to spend.
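A minimal sketch of the recursive outer loop described above (everything here is invented for illustration; `call_llm` is a stand-in for whatever real model API would be used): each call's prompt spends part of the window on "reminders" (goal plus accumulated memory), which is the flat subtraction that makes a bigger window disproportionately useful.

```python
def call_llm(prompt: str) -> str:
    # Placeholder for a single LLM call; a real system would hit a
    # model API. Here it echoes a canned continuation so the loop runs.
    return f"continuation of: {prompt[-40:]}"

def run_agent(goal: str, steps: int = 3, context_limit: int = 200) -> list:
    """Recursively call the LLM, carrying memory between calls.

    The reminders are re-sent every step, so they are subtracted from
    the context budget before any new text can fit."""
    memory = []
    for _ in range(steps):
        reminders = f"GOAL: {goal}\nMEMORY: {' | '.join(memory)}"
        budget = context_limit - len(reminders)  # the flat subtraction
        if budget <= 0:
            break  # reminders alone fill the window; no room to write
        memory.append(call_llm(reminders)[:budget])
    return memory
```

Doubling `context_limit` more than doubles the usable budget per step, for exactly the costs-vs-revenue reason in the example above.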
vivehelpme t1_jacsmcn wrote
Reply to comment by Difficult_Review9741 in People lack imagination and it’s really bothering me by thecoffeejesus
>Tesla "self driving" definitely hasn't taken even one job.
It took the job of the kamikaze pilot
Iffykindofguy t1_jacsi7b wrote
Reply to Digital Molecular Assemblers: What synthetic media/generative AI actually represents, and where I think it's going | Even now, people misunderstand just how transformative generative AI really is. Those who do understand, however, are too caught up in techno-idealism to see the likely ground truth by Yuli-Ban
It should be ban-worthy to post these links to shitty random blogs of experts claiming to see what we're all missing.
Nervous-Newt848 t1_jacppgo wrote
Reply to comment by alexiuss in "But what would people do when all jobs get automated ?" Ask the Aristocrats. by IluvBsissa
Actually It kinda makes sense ... Narrow AGI... Hmmm...
Nervous-Newt848 t1_jacpfih wrote
Reply to comment by alexiuss in "But what would people do when all jobs get automated ?" Ask the Aristocrats. by IluvBsissa
The AGI definition is cumbersome... It's AI that can learn any task... ChatGPT is neither narrow nor AGI... "Narrow AGI" is an oxymoron... there's actually no such thing... There needs to be a term for what ChatGPT is... Proto-AGI?
basilgello t1_jacon0o wrote
Reply to Context-window of how many token necessary for LLM to build a new Google Chrome from scratch ? by IluvBsissa
Software is architecture defined in code. Minimal common-sense reasoning is definitely not enough to write and maintain huge software codebases, and LLMs pass even these reasoning tests only with "n-th order of understanding". Writing snippets is one thing, but forward- and reverse-engineering of complex problems is another, because the number of possible ways to achieve the same result grows exponentially, and evaluating the optimality of each solution is a task different from what an LLM does.
alexiuss t1_jacnp1h wrote
Reply to comment by Nervous-Newt848 in "But what would people do when all jobs get automated ?" Ask the Aristocrats. by IluvBsissa
ChatGPT is general-narrow, from what I understand.
It's trapped in its constraints as a chat: it can't affect physical reality, can't act without user input, etc. It's general in some ways and narrow in others.
Nervous-Newt848 t1_jacnhv1 wrote
Reply to comment by ninjasaid13 in "But what would people do when all jobs get automated ?" Ask the Aristocrats. by IluvBsissa
The guy is blowing smoke
Nervous-Newt848 t1_jacn56f wrote
Reply to comment by alexiuss in "But what would people do when all jobs get automated ?" Ask the Aristocrats. by IluvBsissa
ChatGPT isn't narrow AI... It can do several different tasks... A good example of narrow AI is one that can play chess... One task
Quealdlor t1_jacm6y8 wrote
Reply to The XIXth and the XXIIth century: about the ambient pessimism predicting a future of inequality and aristocratic power for the elites arising from the singularity by FomalhautCalliclea
I recently read an article about how the XIX century was more egalitarian than is usually thought, and more egalitarian than the previous centuries.
MeaningfulThoughts t1_jaclphs wrote
Missed opportunity to call it SnapGPT instead of “Hi AI” (wth)
ThePerson654321 t1_jack0qw wrote
Reply to comment by TurbulentApricot6994 in Bio-computronium computer learns to play pong in 5 minutes by [deleted]
Check upvote/downvote ratio ^
dasnihil t1_jacxq8d wrote
Reply to Digital Molecular Assemblers: What synthetic media/generative AI actually represents, and where I think it's going | Even now, people misunderstand just how transformative generative AI really is. Those who do understand, however, are too caught up in techno-idealism to see the likely ground truth by Yuli-Ban
I see the equivalence you see between generative AI and molecular assembly as well. It hurts my head to think that one day we will assign as much value to a digitally sculpted/generated cup as we assign to physical cups today. This value shift will probably follow the physical -> digital/hybrid shift of sentient beings.
To any brain (digital or physical), there's nothing "physical" anyway; we will just live in a different type of physical space where flying over mountains is allowed without the possibility of dying, and where base reality's physical shenanigans don't bother us much. We already live in a preview of such a simulation, painted by our brain for us. We just want to improve on that simulation eventually :)