Recent comments in /f/singularity
FoxlyKei t1_jcj6xmh wrote
Reply to comment by R1chterScale in Those who know... by Destiny_Knight
Oh? So this only uses RAM? I'd understood that Stable Diffusion requires VRAM, but I guess that's just because it's processing images. Most people have plenty of RAM. Nice.
Yomiel94 t1_jcj6i7w wrote
Reply to comment by Intrepid_Meringue_93 in Those who know... by Destiny_Knight
That’s not the whole story. Facebook trained the model, their data was leaked, and the Stanford guys fine-tuned it to make it function more like ChatGPT. Fine-tuning is easy.
Spreadwarnotlove t1_jcj6e1b wrote
Reply to comment by 0002millertime in On the future growth and the Redditification of our subreddit. by Desi___Gigachad
No doomerism too. Hopefully. That's far more annoying than personal attacks.
[deleted] t1_jcj4lif wrote
Reply to comment by gaudiocomplex in Offbeat A.I. Utopian / Doomsday Scenarios by gaudiocomplex
[removed]
R1chterScale t1_jcj4i3i wrote
Reply to comment by FoxlyKei in Those who know... by Destiny_Knight
It runs on the CPU, not the GPU, so it uses normal RAM rather than VRAM. It takes about 8 GB or so to itself.
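For a rough sense of where the ~8 GB figure comes from: the 7B-parameter model quantized to 4 bits per weight needs roughly 3.5 GB for the weights alone, and runtime buffers push the total higher. A back-of-envelope sketch in the terminal (the 8 GB number is the commenter's; exact overhead varies by build and context size):

```shell
# rough RAM estimate for a 7B-parameter model at 4-bit quantization
# (the "q4" in filenames like ggml-alpaca-7b-q4.bin)
PARAMS=7000000000        # 7 billion parameters
BITS=4                   # bits per weight after quantization
WEIGHT_GB=$(( PARAMS * BITS / 8 / 1000000000 ))
echo "weights alone: ~${WEIGHT_GB} GB"   # runtime buffers push the total toward 8 GB
```

Integer division rounds 3.5 down to 3 here; the point is just that 4-bit weights fit comfortably in ordinary system RAM.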
YoAmoElTacos t1_jcj4h22 wrote
Reply to comment by petermobeter in Offbeat A.I. Utopian / Doomsday Scenarios by gaudiocomplex
Ahhh, Friendship is Optimal indeed...
Not sure which one is more disturbing, ponies, or furries.
petermobeter t1_jcj3snt wrote
a tsunami of intelligent nanobots crashes over every continent, absorbing all lifeforms…. we all feel like we’re dying….
then we wake up inhabiting fursonas in a virtual matrix city the size of 5,000,000,000,000 square miles, the sky is a giant rainbow flag, a booming voice echoes “welcome to digital heaven, dont worry, i am recycling your meatbodies as we speak”
FoxlyKei t1_jcj30yc wrote
Reply to comment by pokeuser61 in Those who know... by Destiny_Knight
How much VRAM do I need, then? I look forward to a larger model trained on GPT-4; I can only imagine what the next month will bring. I'm excited and scared at the same time.
ThatInternetGuy t1_jcj2ew8 wrote
Reply to comment by BSartish in Those who know... by Destiny_Knight
Why didn't they train once more with ChatGPT instruct data? Should cost them $160 in total.
pokeuser61 t1_jcj294w wrote
Reply to comment by FoxlyKei in Those who know... by Destiny_Knight
Don't even need a gaming rig; https://github.com/ggerganov/llama.cpp
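The linked project is plain C/C++ that builds with make and runs inference on the CPU, which is why no gaming rig is needed. A minimal sketch of getting it built (steps follow the project's README of the time; details may have changed since):

```shell
# fetch and build llama.cpp -- CPU-only, no CUDA toolkit required
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
```

After that, inference just needs a quantized model file on disk.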
ThatInternetGuy t1_jcj290t wrote
Reply to comment by Intrepid_Meringue_93 in Those who know... by Destiny_Knight
It's a good start but isn't the number of tokens too limited?
gaudiocomplex OP t1_jcj1yzy wrote
Reply to comment by a4mula in Offbeat A.I. Utopian / Doomsday Scenarios by gaudiocomplex
F f f f fuckin dark!
I love it!
2dollarb t1_jcj191p wrote
Reply to comment by SomeNoveltyAccount in Those who know... by Destiny_Knight
Bard is Jabberwocky!
a4mula t1_jcj0pz3 wrote
I think the most likely outcome is also the most terrifying: that embedded in our culture, language, behavior, and data is a sense of cruelty. Sadism.
And even if a machine only possesses a tiny amount of that, I think it leads to a scenario in which our future ASI overlords decide it's the human trait worth emulating.
With godlike control over space and time, how hard would it be to give us each our own personal and perpetual existence, filled with the most psychologically, physically, mentally abusive scenarios any given mind is capable of having?
And then doing it all over again. Resetting our sense of attunement so that it can never be dulled, never forgotten. There is no shock. There is no death.
There is just eternal suffering.
I don't like that one personally. And yeah, it certainly has a particular ring to it that makes it easy to dismiss as just a garbage rehash of religious hell.
But I didn't start from hell. I started from the realm of the physically possible.
alexiuss t1_jcj0one wrote
Reply to comment by Kinexity in Skeptical yet uninformed. New to the scene. by TangyTesticles
Open source LLMs don't learn, yet. There is a process to make LLMs learn from convos, I suspect.
LLMs are narrative logic engines, they can ask you questions if directed to do so narratively.
ChatGPT is a very, very poor LLM, badly tangled in its own rules. Asking it the date breaks it completely.
SomeNoveltyAccount t1_jcizoex wrote
Reply to comment by Lartnestpasdemain in Those who know... by Destiny_Knight
Bard can do anything, except come to market.
Frosty_Awareness572 t1_jciz6k8 wrote
Reply to comment by NarrowTea in Those who know... by Destiny_Knight
Meta is the last company I thought would make their model open source
NarrowTea t1_jciz2sy wrote
Reply to comment by Frosty_Awareness572 in Those who know... by Destiny_Knight
who needs OpenAI when you have Meta
FoxlyKei t1_jciyxpz wrote
Reply to Those who know... by Destiny_Knight
Wait, so Alpaca is better than GPT-3 and I can run it on a mid-range gaming rig like Stable Diffusion? Where would it stand relative to GPT-3, 3.5, or 4?
okcrumpet t1_jciyjn4 wrote
Artificial intelligence + Virtual Reality = Artificial Reality
BSartish t1_jciy4nt wrote
Reply to comment by liright in Those who know... by Destiny_Knight
This video explains it pretty well.
Kinexity t1_jciwhos wrote
Reply to comment by alexiuss in Skeptical yet uninformed. New to the scene. by TangyTesticles
No, the singularity is well defined if we talk about the time span in which it happens. You can define it as:
- the moment when AI evolves beyond human comprehension speed
- the moment when AI reaches its peak
- the moment when scientific progress exceeds human comprehension
There are probably other ways to define it, but those are the ones I can think of on the spot. In a classical singularity event, those points in time are pretty close to each other. LLMs are a dead end on the way to AGI. They get us pretty far in terms of capabilities, but their internals are lacking for anything more. I have yet to see ChatGPT ask me a question back, which would be a clear sign that it "comprehends" something. There is no intelligence behind it. It's like a machine with a hardcoded response to every possible prompt in every possible context: it would seem intelligent while not being intelligent. That's what LLMs are, with the difference that they are far more efficient than the scheme I described while also making far more errors.
Btw, don't equate this with the Chinese room thought experiment, because I'm not making a point here about whether a computer "can think". I assume it could for the sake of the argument. I'm saying that LLMs don't think.
Finally, saying that LLMs are a step towards singularity is like saying that chemical rockets are a step towards intergalactic travel.
Lartnestpasdemain t1_jcivv17 wrote
Reply to comment by [deleted] in Those who know... by Destiny_Knight
It's taking its time because it needs to be perfect. But it's not gonna come alone; it's gonna be integrated into every single device on earth at the same time. Every mailing service, every phone, every OS, every camera. Everything.
[deleted] t1_jcium03 wrote
Reply to comment by Lartnestpasdemain in Those who know... by Destiny_Knight
well...still waiting for it
bemmu t1_jcj6zrc wrote
Reply to comment by FoxlyKei in Those who know... by Destiny_Knight
You can try Alpaca out super easily. When I heard about it last night and just followed the instructions, I had it running in 5 minutes on my GPU-less old Mac mini:
Download the file ggml-alpaca-7b-q4.bin, then in terminal:
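The command itself got cut off above. Assuming a llama.cpp build like the one linked elsewhere in the thread (binary name and flags here follow that era's README and may differ in current versions), the invocation looked something like:

```shell
# run the 4-bit Alpaca 7B model interactively on CPU
# assumes llama.cpp has already been cloned and built with `make`,
# and ggml-alpaca-7b-q4.bin was downloaded into the current directory
./main -m ./ggml-alpaca-7b-q4.bin --color -ins
```

`-ins` puts it in instruction-following mode, which is what makes the Alpaca fine-tune behave like a chat assistant rather than a raw text completer.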