Recent comments in /f/MachineLearning
cass1o t1_j91eydy wrote
Reply to [D] Please stop by [deleted]
>and no one with working brain will design an ai that is self aware.(use common sense)
Don't trust tech people with few scruples not to try it. I'm not saying they can do it, but if it's an option, don't trust them not to try.
goolulusaurs t1_j91ewpt wrote
Reply to comment by Ulfgardleo in [D] Please stop by [deleted]
You are just guessing; cite a scientific paper.
ant9zzzzzzzzzz t1_j91et1r wrote
Reply to comment by crt09 in [D] Is anyone working on ML models that infer and train at the same time? by Cogwheel
CL (continual learning) can also just mean retraining frequently
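A minimal sketch of that idea, with hypothetical names and a toy "model" that just tracks a running mean of the targets: buffer incoming samples and refit from scratch every k observations, rather than updating incrementally.

```python
from collections import deque


class FrequentRetrainer:
    """Naive 'continual learning' via periodic full retraining:
    buffer recent samples and refit from scratch every k samples."""

    def __init__(self, retrain_every=100, window=1000):
        self.buffer = deque(maxlen=window)  # keep only the most recent data
        self.retrain_every = retrain_every
        self.seen = 0
        self.mean = 0.0  # toy "model": running mean of targets

    def observe(self, x, y):
        self.buffer.append((x, y))
        self.seen += 1
        if self.seen % self.retrain_every == 0:
            self._retrain()

    def _retrain(self):
        # a full refit on the buffered window, not an incremental update
        ys = [y for _, y in self.buffer]
        self.mean = sum(ys) / len(ys)

    def predict(self, x):
        return self.mean


retrainer = FrequentRetrainer(retrain_every=100)
for i in range(200):
    retrainer.observe(i, 1.0)
```

The bounded window also gives you a crude forgetting mechanism, which is one reason frequent retraining is sometimes treated as a baseline for continual learning setups.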
Deep-Station-1746 t1_j91egc2 wrote
Reply to [D] Please stop by [deleted]
Isn't this kind of high-quantity, low-quality trend inevitable once the base topic passes some popularity threshold? Is there any reason to try to fight the inevitable, instead of forming more niche, less popular communities?
Ulfgardleo t1_j91e648 wrote
Reply to comment by goolulusaurs in [D] Please stop by [deleted]
Due to the way their training works, LLMs cannot be sentient. They lack any way to interact with the real world outside of text prediction, they have no way to commit knowledge to memory, and they have no sense of time or order of events, because they can't remember anything between sessions.
If something cannot be sentient, one does not need to measure it.
[deleted] OP t1_j91dl7k wrote
Reply to comment by Optimal-Asshole in [D] Please stop by [deleted]
[deleted]
kolmiw t1_j91cyf6 wrote
Reply to [D] Please stop by [deleted]
To be fair, a self-aware AI would earn you insane academic recognition, so I'm pretty sure that even people with a really well-working brain would design one.
[deleted] OP t1_j91cwyi wrote
Reply to comment by quichemiata in [D] Please stop by [deleted]
[deleted]
he_who_floats_amogus t1_j91cfcf wrote
Reply to comment by goolulusaurs in [D] Please stop by [deleted]
Not even guessing. When you're guessing, you're making a well-defined conjecture concerning one or more possible outcomes. This assertion isn't well-defined, which is why it cannot be measured. It's a much lower-order type of statement than a speculative guess.
[deleted] OP t1_j91c7ry wrote
Reply to comment by suflaj in [D] Please stop by [deleted]
[deleted]
planetoryd t1_j91c39y wrote
Reply to comment by [deleted] in [D] Please stop by [deleted]
Why invent a tool? Invent a god. Sentience is the ultimate goal.
"We must control"? Lol, humans just don't have the mental capacity. Where does the superiority even come from?
quichemiata t1_j91bxci wrote
Reply to [D] Please stop by [deleted]
You could recommend an alternative instead of hating on people for asking questions and lumping them in with advertisers:
r/learnmachinelearning
[deleted] OP t1_j91bw5x wrote
Reply to [D] Please stop by [deleted]
[deleted]
Optimal-Asshole t1_j91boue wrote
Reply to [D] Please stop by [deleted]
Be the change you want to see in the subreddit. Avoid your own low quality posts. Actually post your own high quality research discussions before you complain.
"No one with working brain will design an ai that is self aware.(use common sense)" CITATION NEEDED. Some people would do it on purpose, and it can happen by accident.
[deleted] OP t1_j91blqt wrote
Reply to comment by goolulusaurs in [D] Please stop by [deleted]
[deleted]
[deleted] OP t1_j91b9py wrote
Reply to comment by XecutionStyle in [D] Please stop by [deleted]
[removed]
goolulusaurs t1_j91b7j1 wrote
Reply to [D] Please stop by [deleted]
There is no way to measure sentience so you are literally just guessing. That being said I agree about the low quality blog spam.
Edit: to whoever downvoted me, please cite a specific scientific paper showing how to measure sentience then.
[deleted] OP t1_j91aoqn wrote
Reply to comment by dojoteef in [D] Please stop by [deleted]
[deleted]
[deleted] t1_j91aexp wrote
Reply to comment by merlinsbeers in [D] Quality of posts in this sub going down by MurlocXYZ
[deleted]
XecutionStyle t1_j91aa70 wrote
Reply to [D] Please stop by [deleted]
You don't know the capacity of what you're making until you make it, though.
suflaj t1_j919wen wrote
Reply to comment by dojoteef in [D] Please stop by [deleted]
This is also a pretty low quality post. Although the gist of it makes sense,
> and no one with working brain will design an ai that is self aware
made the author lose pretty much any credibility. Followed by
> use common sense
makes me think OP is actually hypocritical. For some, the common sense IS that ChatGPT is sentient.
Whether you design a self-aware AI is not only out of one's control; self-awareness is not really well-defined in the first place. The only reason we do not call ChatGPT self-aware at this point is the AI effect; otherwise we would need to invent new prerequisites. Whether it is sentient, and why or why not, is an interesting discussion regardless of your level of expertise, but we could create a pinned thread for that, similar to how we have Simple Questions for the exact same purpose of preventing flooding.
Be that as it may, I do not believe mods should act aggressively on posts like this one and that one. ML has not been an exact science for a long time now. Downvote and move on; that's the only thing a redditor does anyway, and the only way you can abide by rule 1, since the alternative is excluding laymen. Ironically, if we did that, OP, as a layman himself, would be excluded.
[deleted] OP t1_j919hir wrote
Reply to [D] Please stop by [deleted]
[removed]
master3243 t1_j918xav wrote
Reply to [D] Please stop by [deleted]
Agreed, I would prefer posts about SOTA research, big/relevant projects, or news.
blablanonymous t1_j917xm2 wrote
Reply to comment by a1_jakesauce_ in [D] What are the worst ethical considerations of large language models? by BronzeArcher
Is that real? I don't know why, but I feel like it could be totally fake.
blablanonymous t1_j91f20x wrote
Reply to comment by [deleted] in [D] Please stop by [deleted]
Just gotta be stricter at enforcing them IMHO