Recent comments in /f/singularity

dasnihil t1_jacxq8d wrote

I see the equivalence you see between generative AI and molecular assembly as well. It hurts my head to think that one day we will assign as much value to a digitally sculpted/generated cup as we assign to physical cups today. This value shift will probably follow the physical -> digital/hybrid shift of sentient beings.

To any brain (digital or physical), nothing is truly "physical" anyway; we will just live in a different type of physical space where flying over mountains is allowed without the possibility of dying, and where base reality's physical shenanigans don't bother us much. We already live in a preview of such a simulation, painted by our brain for us. We just want to improve on that simulation eventually :)

10

No_Ninja3309_NoNoYes t1_jacwzfq wrote

Purportedly Twitter has 20M LoC of Scala. Scala is a JVM language that is somewhat more concise than Java. IDK how much of that is unit tests, documentation, and acceptance tests. Anyway, style, programming language, and culture matter. Some coders are verbose; others just want to get the job done. You can write unreadable code in any language. That's fine for small projects, because you can figure out what is going on through trial and error. For Twitter it will not work. The bigger the team, the more clearly and defensively you have to code. Defensive code is verbose, since you are checking for preconditions that might rarely occur. Some languages are more verbose than others.
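A minimal sketch of what that defensive verbosity looks like (hypothetical function and field names, and in Python rather than Scala for brevity): the happy path is two lines, the precondition guards are six.

```python
def transfer(account_from: dict, account_to: dict, amount: float) -> None:
    """Move money between two toy account records (illustrative only)."""
    # Precondition checks: rarely triggered, but they dominate the line count.
    if amount <= 0:
        raise ValueError(f"amount must be positive, got {amount}")
    if "balance" not in account_from or "balance" not in account_to:
        raise KeyError("both accounts need a 'balance' field")
    if account_from["balance"] < amount:
        raise RuntimeError("insufficient funds")

    # The actual work.
    account_from["balance"] -= amount
    account_to["balance"] += amount
```

Strip the guards and the function still works on well-formed input; the extra lines only pay off when a teammate (or an LLM) calls it wrong.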

But anyway, no one codes bottom-up. You usually start with a global design and iterate multiple times, using mock-ups if something is still vague. I don't think your question has an answer right now. Someone has to try it and see what the issues are.

2

OutOfBananaException t1_jacw2ry wrote

Being aligned to humans may help, but a human-aligned AGI is hardly 'safe'. We can't imagine what it means to be aligned, given that we can't reach mutual consensus among ourselves. If we can't define the problem, how can we hope to engineer a solution for it? Solutions driven by early AGI may be our best hope for favorable outcomes with later, more advanced AGI.

If you gave a toddler the power to 'align' all adults to its desires, plus the authority to overrule any decision, would you expect a favorable outcome?

1

TFenrir t1_jacuy0q wrote

I think this is really hard to predict, because there are many different paths forward. What if LLMs get good at writing minified code directly? What if they make their own programming language? What happens with new architectures that have something like RETRO or similar memory stores built in? Heck, even current vector stores allow for some really impressive things. There are tons of architectures that could potentially come into play that make a maximum context window of 32k tokens more than enough, or maybe 100k is needed. There was a paper I read a while back that was experimenting with context windows that large.

Also, you should look into Google Pitchfork, the code name for a project Google is working on that is essentially an LLM tied to a codebase, one that can iteratively improve it through natural-language requests.

My gut is, by this summer we will start to see very interesting small apps built with unique architectures that are LLMs iteratively improving a codebase. I don't know where it will go from there.

1

vivehelpme t1_jacuivl wrote

>40 years till the end of our human society as we know it. Whatever comes next will be so radically different it will be unrecognizable.

400 years ago, one could sit at a wooden outdoor table with a glass of wine, wearing woven textile clothes, and enjoy the warmth of a sunny spring day.

In 40 years I can still do that. Some things change, others don't. I don't need a pair of carbon fiber nanotube smartpants with RGB LEDs that can give me a handjob thank you very much.

1

Lawjarp2 t1_jacugjc wrote

Context won't even matter. No single person wrote all those millions of lines of code, and no single person needs to know all of it. Just the functionality of each module and how to use it is enough context for others to build their own modules.

Essentially a 32k or even 8k context would itself be enough. But ChatGPT as it is now is not robust.
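One way to picture why module boundaries shrink the needed context: collapse each module to just its public signatures and docstrings, and feed only that summary to the model. A rough sketch, with `interface_summary` being a hypothetical helper, not anything ChatGPT actually does:

```python
import ast

def interface_summary(source: str) -> str:
    """Collapse a Python module to its public interface.

    Keeps only top-level function signatures and docstrings -- the part
    another developer (or an LLM) needs in order to *use* the module,
    as opposed to the full implementation.
    """
    tree = ast.parse(source)
    lines = []
    for node in tree.body:
        is_func = isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
        if is_func and not node.name.startswith("_"):
            args = ", ".join(a.arg for a in node.args.args)
            doc = ast.get_docstring(node) or ""
            lines.append(f'def {node.name}({args}): """{doc}"""')
    return "\n".join(lines)
```

A 2,000-line module might summarize to a few dozen lines this way, which is how 20M LoC can still fit a workflow built around an 8k window.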

20

dasnihil t1_jacu838 wrote

Comparing applications by "lines of code" is okay for laymen to do, but software engineers know the challenge involved in letting an AI model build a Chrome-like codebase (https://github.com/chromium/chromium).

LLMs are good now; they can do minuscule things within a small context. What we need now is a bigger thinking machine that gets the big picture and makes use of LLMs and other predictive networks to get things done, while staying focused on that big picture and on bug fixes along the way. "Bugs" are not just errors that a superintelligent AI would never make, but also adjustments and adaptations to technological improvements and improved algorithms.

But we can totally do the lines-of-code vs. tokens -> LLM thing; it's a fun mental exercise, but pointless.

1

vivehelpme t1_jactr0m wrote

Prompt-to-3D exists; the rest is just an implementation matter of chopping up the original text into good prompt snippets and getting the "style" of the output polished so it appears consistent and conveys the story.

There's no innovation needed for it, just someone with the know-how who wants to explore that particular creative arena and has access to enough cloud GPUs.

1

challengethegods t1_jact6ez wrote

Well, the context window is not as limiting as people seem to think. It's basically the range of text the model can handle in a single instant. For example, if someone asks you a trick question and the predictable false answer pops into your head immediately, that's what a single call to an LLM is. Once people figure out how to recursively call the LLM inside a larger system that keeps track of long-term memory/goals/tools/modalities/etc., it will suddenly be a lot smarter, and that kind of system can have even GPT-3 write entire books.

The problem is, the overarching system also has to be AI, and sophisticated enough to complement the LLM, in order to reach a range where the recursive calls are coherent. The context window gets eaten very quickly by reminding the model of relevant things, to the point where writing one more sentence/line might take the entire context window just to hold all the relevant information, or even require an additional pass afterwards to check the extra line against another entire block of text. Which is to say: an 8k context window is not 2x as good as a 4k context window. It's much better, because all of the reminders are a flat subtraction.

Real-world layman example: suppose you have $3900/month in costs and $4000/month in revenue; that leaves $100/month you can spend on "something". Now increase revenue to $8000/month: suddenly you have 41x as much to spend.
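The arithmetic above generalizes: if the reminders eat a fixed chunk of the window, the usable budget scales like (window minus overhead), not like the window itself. A tiny sketch, where the 3900-token overhead is just the analogy's number, not a real measurement:

```python
def usable_budget(context_window: int, fixed_overhead: int) -> int:
    """Tokens left for new writing after the fixed 'reminder' overhead.

    fixed_overhead is illustrative -- the analogy's $3900/month, read as
    tokens of long-term memory/goals/tools restated on every call.
    """
    return max(context_window - fixed_overhead, 0)

overhead = 3900                          # hypothetical flat reminder cost
small = usable_budget(4000, overhead)    # 100 tokens to spend
large = usable_budget(8000, overhead)    # 4100 tokens to spend
print(large / small)                     # 41x, not 2x
```

The closer the overhead sits to the window size, the more dramatic the gain from doubling the window.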

8

basilgello t1_jacon0o wrote

Software is architecture defined in code. Minimal common-sense reasoning is definitely not enough to write and maintain huge software codebases, and LLMs pass even these reasoning tests only with an "n-th order of understanding". Writing snippets is one thing, but forward- and reverse-engineering complex problems is another: the number of possible ways to achieve the same result grows exponentially, and evaluating the optimality of each solution is a task different from what an LLM does.

21