Recent comments in /f/MachineLearning
blablanonymous t1_j7b3vcx wrote
Reply to comment by Eggy-Toast in [N] GitHub CEO on why open source developers should be exempt from the EU’s AI Act by EmbarrassedHelp
That’s a very narrow perspective. Not all technological progress is inherently good; it depends on what you do with it. These new tools have the potential to create extremely useful applications, but also to destroy many jobs very rapidly, concentrating wealth even further in the hands of a small population. This can have profound effects on this generation and is definitely worth thinking about. Think of the socioeconomic mess that big tech brought to San Francisco, but at a global scale. SF was heaven 20 years ago. Now it’s hell on earth.
GreenOnGray t1_j7b22lh wrote
Reply to comment by edjez in [D] Are large language models dangerous? by spiritus_dei
Imagine you and I each have a super intelligent AI. You ask yours to help you end humanity. I ask mine to help me preserve it. If we both diligently cooperate with our AIs’ advice, what do you think is the outcome?
race2tb t1_j7az723 wrote
Reply to [N] GitHub CEO on why open source developers should be exempt from the EU’s AI Act by EmbarrassedHelp
I mean, AI will be able to generate its own unique art styles like humans can and copyright them instantly. Copyright is over once these generative models are doing pretty much everything and reasoning out their own unique solutions. It is time to start thinking about how to restructure society away from human creators toward AI creators. Honestly, I have no idea how the patent office is going to keep up without an AI doing the approvals. I'm pretty sure patents and property rights will no longer be functional concepts in a society where AI is producing everything.
Even politicians' jobs will end up being done by AIs in the end: data-driven decision makers with some oversight by human validators.
matth0x01 t1_j7ayc9e wrote
Reply to comment by Ggronne in Information Retrieval book recommendations? [D] by Ggronne
Seems that you are more interested in the crawling and ETL side.
Maybe you should look more into the data warehouse or data lake literature, especially the paradigm shift from ETL (extract, transform, load) to ELT (extract, load, transform), i.e. schema-on-read.
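To make the ETL-vs-ELT distinction concrete, here is a minimal sketch of schema-on-read: raw records are loaded untouched, and a schema is applied only at read time. All names here (`raw_store`, `load`, `read_with_schema`) are hypothetical illustrations, not any real data lake API.

```python
import json

raw_store = []  # stands in for a data lake (hypothetical in-memory store)

def load(record_json: str) -> None:
    """'L' step: persist the raw record without any transformation."""
    raw_store.append(record_json)

def read_with_schema(fields: list[str]) -> list[dict]:
    """'T' step deferred to read time: project each raw record onto a schema."""
    rows = []
    for rec in raw_store:
        doc = json.loads(rec)
        rows.append({f: doc.get(f) for f in fields})
    return rows

# Scraped pages with inconsistent shapes can be loaded as-is:
load('{"url": "https://example.com", "title": "Home", "html": "<p>hi</p>"}')
load('{"url": "https://example.org", "body": "plain text"}')

# The same raw data supports two different schemas on read:
print(read_with_schema(["url", "title"]))
print(read_with_schema(["url", "body"]))
```

The point is that the schema decision is deferred until query time, which is convenient for heterogeneous crawled data.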
schwagggg t1_j7avvo1 wrote
Reply to comment by jimmymvp in [D] Normalizing Flows in 2023? by wellfriedbeans
hey thanks for the reference let me take a look.
HeyLittleTrain t1_j7avkil wrote
Reply to comment by i2mi in [R] Multimodal Chain-of-Thought Reasoning in Language Models - Amazon Web Services Zhuosheng Zhang et al - Outperforms GPT-3.5 by 16% (75%->91%) and surpasses human performance on ScienceQA while having less than 1B params! by Singularian2501
Your answer seems substantially different from the others.
---AI--- t1_j7au2sj wrote
Reply to comment by Myxomatosiss in [D] Are large language models dangerous? by spiritus_dei
The Chinese room experiment is proof that a Chinese room can be sentient. There's no difference between a Chinese room and a human brain.
> It doesn't consider the context of the problem because it has no context.
I do not know what you mean here, so could you please give a specific example of something you think ChatGPT and similar models will never be able to answer correctly?
69BigDickMan420 t1_j7asayi wrote
Reply to comment by po-handz in [N] GitHub CEO on why open source developers should be exempt from the EU’s AI Act by EmbarrassedHelp
Getting downvoted for stating facts
candidhorse4 OP t1_j7ark94 wrote
Reply to comment by who_ate_my_motorbike in What text to speech does this guy use? [R] by candidhorse4
ikr, it's almost like some bot makes it lol. It would be interesting to know if anyone knows any of those text-to-speech voices tho
bjergerk1ng t1_j7ar6ps wrote
Reply to comment by Myxomatosiss in [D] Are large language models dangerous? by spiritus_dei
Hi ChatGPT
Dr_Love2-14 t1_j7aqm6x wrote
Reply to comment by ThirdMover in [R] Multimodal Chain-of-Thought Reasoning in Language Models - Amazon Web Services Zhuosheng Zhang et al - Outperforms GPT-3.5 by 16% (75%->91%) and surpasses human performance on ScienceQA while having less than 1B params! by Singularian2501
During model training, I imagine the model would benefit from some form of "self-reflection" at recurrent intervals, similar to human sleep. For a crude workflow, one could design the model to recall, through auto-prompting onto a context window, everything it's learned that is relevant to the newly exposed training data. The model then makes a rational decision (following a constant pre-encoded prompt) to restate the information and classify it as factual or non-factual, and this self-generated text is backpropagated into the model.
(Disclaimer: I follow ML research as a layman)
who_ate_my_motorbike t1_j7akxkl wrote
I don't know what voice is being used, but the content looks almost entirely algorithmically generated. Sometimes it doesn't quite understand the video segment, so it gets it wrong.
mulokisch t1_j7ak14a wrote
Reply to comment by kaiser_xc in [N] GitHub CEO on why open source developers should be exempt from the EU’s AI Act by EmbarrassedHelp
Cookies came way before GDPR 🤗
Ggronne OP t1_j7aj3co wrote
Reply to comment by matth0x01 in Information Retrieval book recommendations? [D] by Ggronne
I have written small web scrapers for different applications, but none were based on theory. An upcoming project requires more extensive information retrieval and I would therefore like to get a better foundation.
I will start with Introduction to Information Retrieval, thanks!
7734128 t1_j7aioqq wrote
Reply to comment by Necessary_Ad_9800 in [N] Microsoft integrates GPT 3.5 into Teams by bikeskata
Our school held some lectures over Teams during the pandemic. There's a pop-up each time someone tries to enter a Teams meeting, which is annoying in normal cases but disastrous when there are 200+ participants.
Ggronne OP t1_j7aiiwp wrote
Reply to comment by larswl1 in Information Retrieval book recommendations? [D] by Ggronne
I will start with the Introduction to Information Retrieval and look at articles for further knowledge. Thanks!
Eggy-Toast t1_j7agw75 wrote
Reply to comment by blablanonymous in [N] GitHub CEO on why open source developers should be exempt from the EU’s AI Act by EmbarrassedHelp
There is not a positive spin to this. The downstream pipeline is ultimately what makes AI beneficial to the common workforce. Complicating it runs the risk of creating another bureaucratic gauntlet that's all but impossible for the average startup to get through.
jimmymvp t1_j7afxx4 wrote
Reply to comment by schwagggg in [D] Normalizing Flows in 2023? by wellfriedbeans
You can perfectly well do the reverse KL with diffusion models, see here:
jimmymvp t1_j7aex0t wrote
Reply to comment by PHEEEEELLLLLEEEEP in [D] Normalizing Flows in 2023? by wellfriedbeans
In theory yes; in practice it's not exact: it's approximated via a trace estimator and an ODE solver.
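For anyone unfamiliar with the trace-estimator part: the Jacobian trace in a continuous normalizing flow is typically estimated stochastically (Hutchinson's estimator), tr(A) ≈ E[vᵀAv] with Rademacher probes v. A toy NumPy sketch on an explicit matrix (in practice one uses Jacobian-vector products, never the full Jacobian):

```python
import numpy as np

def hutchinson_trace(A: np.ndarray, n_samples: int = 10_000, seed: int = 0) -> float:
    """Stochastic trace estimate: tr(A) ~= mean of v^T A v over Rademacher v."""
    rng = np.random.default_rng(seed)
    d = A.shape[0]
    v = rng.choice([-1.0, 1.0], size=(n_samples, d))  # Rademacher probe vectors
    # Per-sample quadratic forms v^T A v, then average.
    return float(np.mean(np.einsum("ni,ij,nj->n", v, A, v)))

A = np.array([[2.0, 0.3],
              [0.1, -1.0]])
print(hutchinson_trace(A))  # close to tr(A) = 1.0, up to Monte Carlo noise
```

This is why the log-likelihood from such models is unbiased in expectation but not exact for any single evaluation, on top of the ODE-solver error.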
jimmymvp t1_j7aend6 wrote
Reply to comment by badabummbadabing in [D] Normalizing Flows in 2023? by wellfriedbeans
Indeed, if your model is bad at modeling the data, there's not much use in computing likelihoods. If you just want to sample images that look cool, you don't care much about likelihoods. However, there are certain use cases where we do care about exact likelihoods: estimating normalizing constants and providing guarantees for MCMC. Granted, you can always run MCMC with something close to a proposal distribution, but obtaining nice guarantees on convergence and mixing times (correctness?) is then difficult; I don't know how you are supposed to do this with a proposal whose likelihood you can't evaluate. Similarly with importance sampling: you only obtain correct weights if you have the correct likelihoods; otherwise it's approximate, not just in the model but also in the estimator.
This is the way I see it, at least, but I'll be sure to read the aforementioned paper. I'm also not sure how much having only a lower bound hurts you in estimation.
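The importance-sampling point can be seen in a toy example: the weights w = p(x)/q(x) require the exact density of the proposal q, and plugging in an approximate log-likelihood would bias the estimator itself, not just the model. A minimal sketch with Gaussians (all distributions here chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_normal_pdf(x, mu, sigma):
    """Exact log-density of N(mu, sigma^2)."""
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

# Estimate E_p[x^2] = 1 for target p = N(0, 1), sampling from proposal q = N(1, 2^2).
n = 200_000
x = rng.normal(1.0, 2.0, size=n)                       # samples from q
log_w = log_normal_pdf(x, 0.0, 1.0) - log_normal_pdf(x, 1.0, 2.0)
w = np.exp(log_w)                                      # exact weights p(x)/q(x)
estimate = np.mean(w * x**2)
print(estimate)  # close to the true value 1.0
```

If `log_normal_pdf` for q were only a lower bound (as with a VAE-style proposal), the weights would be systematically off, which is the "approximate in the estimator" issue above.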
rudboi12 t1_j7aelss wrote
Reply to [N] GitHub CEO on why open source developers should be exempt from the EU’s AI Act by EmbarrassedHelp
EU GDPR laws make sense to me, but how does this make sense? Are they just going to keep banning every new advance in tech? ChatGPT isn't even that bad or revolutionary.
Cherubin0 t1_j7aegoc wrote
Reply to [D] Are large language models dangerous? by spiritus_dei
All it can do is make your writing much more productive. It can write scams just like you can write scams.
Cherubin0 t1_j7ae83y wrote
Reply to 15 years old and bad at math [D] by Daniel_C_____
You should get a good study routine. Unless you have some form of disability, getting good at math is just a matter of playing with math.
Myxomatosiss t1_j7abejl wrote
Reply to comment by ---AI--- in [D] Are large language models dangerous? by spiritus_dei
That's a fantastic question. ChatGPT is a replication of associative memory with an attention mechanism. That means it has associated strings with other strings based on a massive amount of experience. However, it doesn't contain a buffer that it works through. We have a working space in our heads where we can replay information; ChatGPT does not. In fact, when you pump in an input, it cycles through the associative calculations, comes to an instantaneous answer, and then ceases to function until another call is made. It doesn't consider the context of the problem because it has no context. Any context it has is inherited from its training set.

To compare it with the Chinese room experiment, imagine if those reading the output of the Chinese room found it to have some affect. Maybe it has a dry sense of humor, or is a bit of an airhead. That affect would come exclusively from the data set, and not from some bias in the room.

I really encourage you to read more about neuroscience if you'd like to learn more. There have been brilliant minds considering intelligence since long before we were born, and every ML accomplishment has been inspired by their work.
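For readers wondering what the "associative" lookup with attention means mechanically: each query retrieves a similarity-weighted blend of stored values in a single pass, with no persistent working buffer between calls. A minimal single-head sketch (shapes and data are arbitrary illustrations):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))  # stabilized
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: an associative lookup where each
    query row retrieves a softmax-weighted mixture of value rows."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # query/key similarity
    return softmax(scores, axis=-1) @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
out = attention(Q, K, V)
print(out.shape)  # one output per query; nothing is carried to the next call
```

Each forward call is stateless in exactly the sense described above: the function returns and retains nothing.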
[deleted] t1_j7b5wo9 wrote
Reply to [N] "I got access to Google LaMDA, the Chatbot that was so realistic that one Google engineer thought it was conscious. First impressions" by That_Violinist_18
You're quite late to the party