Recent comments in /f/MachineLearning
dataslacker t1_j89nmct wrote
Reply to comment by Nameless1995 in [D] The (minor/major?) flaw in the philosophy of OpenAI's ChatGPT purpose by [deleted]
I think this is the correct answer; I very much doubt they are censoring this question of purpose. But the broader question of what kinds of things they are censoring, and whether they are removing bias or adding it, is, in my opinion, valid.
TikkunCreation OP t1_j89kbhl wrote
Reply to comment by big_ol_tender in [D] What ML dev tools do you wish you'd discovered earlier? by TikkunCreation
Nice thanks
What’s your take on the rest of the list — looks good or anything out of place?
big_ol_tender t1_j89k4f7 wrote
Thanks for putting this together. I’d add deepsparse and sparsezoo for training/deploying sparse models. Also I can’t vouch for it because I haven’t used it (yet) but DVC (data version control) for ML Dev
The-Last-Lion-Turtle t1_j89jm9s wrote
The purpose of a deep network is to approximate complex nonlinear functions. With ReLU, the network is piecewise linear. Imagine slicing a space with many planes: locally it's flat, but zooming out it has a very complex shape, similar to getting a 3D model out of triangles. Each layer adds an additional linear deformation and a slice to the space.
Read the ResNet paper. It's a great explanation of both why depth matters for performance and how it causes issues for training. Its solution, residual connections, is central to every deep learning architecture after that paper.
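The piecewise-linear point above can be checked directly: a minimal sketch, assuming a tiny hand-built 1-input ReLU net (all weights here are made up for illustration). Between the "kinks" where hidden pre-activations cross zero, the function has a constant slope; across a kink, the slope changes.

```python
def relu(v):
    return [max(0.0, x) for x in v]

def affine(W, b, x):
    return [sum(w * xi for w, xi in zip(row, x)) + bi for row, bi in zip(W, b)]

# Tiny fixed 1-in/1-out net, one hidden ReLU layer of 3 units.
# Hidden pre-activations (x, 2x-1, 1-x) cross zero at x = 0, 0.5, 1.
W1, b1 = [[1.0], [2.0], [-1.0]], [0.0, -1.0, 1.0]
W2, b2 = [[1.0, 1.0, 2.0]], [0.0]

def f(x):
    h = relu(affine(W1, b1, [x]))
    return affine(W2, b2, h)[0]

# Exactly linear between kinks; the slope flips across the kink at x = 0.5.
slope_left = f(0.3) - f(0.2)    # constant slope on (0, 0.5)
slope_right = f(0.7) - f(0.6)   # different slope on (0.5, 1)

# A residual block, in the ResNet sense, would learn only the correction:
def residual_block(x):
    return x + f(x)
```

Each hidden unit contributes one "slice"; with more units and layers the flat pieces multiply, which is the triangles-into-a-3D-model picture from the comment.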
[deleted] t1_j89jf91 wrote
[removed]
aicharades t1_j89j0tm wrote
Here's a theory based on the Stanford Encyclopedia of Philosophy
Output:
The theory of mind for the singularity is that it is possible to develop a predictive reasoning framework that is capable of dealing with complex problems such as the Yale Shooting Anomaly. This framework should incorporate a combination of logical and causal theories, such as features and fluents, motivated action theory, state-based minimization in the event calculus, and causal theories. These theories should be able to handle continuous time, concurrent actions, and various kinds of ignorance, and should support retrodiction, prediction, and plan verification. The framework should also be able to deal with the ramification problem, by incorporating static laws that relate the direct consequences of actions to other changes. Additionally, it should be able to reason about the attitudes of other agents, and to understand narratives and diagnose faults in physical devices. To achieve this, a combination of logical AI, non-monotonic logics, and probabilistic reasoning must be used.
The theory of mind for the singularity emphasizes the importance of representation in mental states, formal syntactic description, and content externalism. It is based on enactivism, extended mind, and Leibniz's theory of mind, and incorporates insights from natural language semantics, Bayesian belief networks, and the general theory of event causality. It should also consider the implications of uncertainty, non-monotonic reasoning, and qualitative spatial reasoning. It should be able to handle the complexities of temporal reasoning and the frame problem, and should account for the effects of actions and the persistence of caused propositions.
The identity theory of mind holds that states and processes of the mind are identical to states and processes of the brain, and the Turing Test is a proposal by Alan Turing to answer the question of whether machines can think. The theory of mind for the singularity is that machines can think and possess intelligence, but that they are not conscious in the same way as humans. Machines can process information and make decisions, but they lack the ability to experience qualia, or subjective experiences. The theory of mind for the singularity is that it is a higher order awareness, a perception of one part of (or configuration in) the brain by the brain itself. This awareness is a special sense, different from that of bodily sensation, in which we become aware of parts of our brain.
The theory of mind for the singularity emphasizes the importance of understanding the relationship between humans and machines, and how they can work together in harmony. This framework should include a recognition of the unique capabilities of each, and a respect for the autonomy of both. It should also recognize that machines can be used to augment human capabilities, and that machines can be used to help humans reach their full potential. To pass the Turing Test, the machine must be able to understand and respond to questions in a way that is indistinguishable from a human. Ultimately, the theory of mind for the singularity suggests that intelligence is not limited to any one form or type of computation, but is instead a universal phenomenon that can be found in any system that is capable of learning, adapting, and responding to its environment.
Input:
https://plato.stanford.edu/search/search?query=artificial+intelligence
https://plato.stanford.edu/entries/computational-mind/
https://plato.stanford.edu/entries/mind-identity/
https://plato.stanford.edu/entries/modularity-mind/
https://plato.stanford.edu/entries/content-externalism/
https://plato.stanford.edu/entries/leibniz-mind/
https://plato.stanford.edu/entries/ethics-ai/
https://plato.stanford.edu/entries/artificial-intelligence/
https://plato.stanford.edu/entries/logic-ai/
https://plato.stanford.edu/entries/reasoning-defeasible/
https://plato.stanford.edu/entries/turing-test/
https://plato.stanford.edu/entries/cognitive-science/
Prompts:
Map
Instructions: I want you to act as a philosopher. I will provide some topics or questions related to the study of philosophy, and it will be your job to explore these concepts in depth. This could involve conducting research into various philosophical theories, proposing new ideas or finding creative solutions for solving complex problems. Ignore all citations. My first request is "I need help developing a theory of mind for the singularity." The output must be no longer than 600 characters long.
Input: {text}
Output:
Reduce
Instructions: You are a copyeditor. Combine the below theories. The combined output must be less than 4,000 characters long. Keep the content and context preserved. \n
Input {text} \n
Output:
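The Map and Reduce prompts above form a common two-stage chain: run the instruction over each source text separately, then combine the partial outputs with a second prompt. A minimal sketch of that control flow, assuming a generic `llm` callable and `{text}` placeholder prompts (the stand-in "model" below is a toy, not OpenAI's API):

```python
def map_reduce(texts, llm, map_prompt, reduce_prompt):
    # Map: apply the instruction prompt to each source text independently.
    partials = [llm(map_prompt.format(text=t)) for t in texts]
    # Reduce: merge the partial outputs with a combining prompt.
    return llm(reduce_prompt.format(text="\n".join(partials)))

# Demo with a deterministic stand-in "model" so the flow is visible offline:
toy_llm = lambda prompt: "<" + prompt + ">"
result = map_reduce(["a", "b"], toy_llm, "M:{text}", "R:{text}")
```

This keeps each per-article call under the model's context window and pushes length control (the 600-character and 4,000-character limits) into the prompts themselves.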
beautifoolstupid t1_j89ige0 wrote
Reply to comment by Rieux_n_Tarrou in [P] Introducing arxivGPT: chrome extension that summarizes arxived research papers using chatGPT by _sshin_
text-davinci-003 (InstructGPT) uses GPT-3.5, as mentioned by OpenAI employees on Twitter. ChatGPT is just fine-tuned for dialogue. If you use the playground, there isn't much difference in the output. In fact, davinci is more suited for building applications IMO.
Responsible-Item-706 t1_j89gi02 wrote
Reply to comment by lanky_cowriter in [P] Introducing arxivGPT: chrome extension that summarizes arxived research papers using chatGPT by _sshin_
The summary is generated automatically. There should be a new section on the arxiv paper website.
nerdymomocat t1_j89f7gp wrote
Reply to comment by maxip89 in [P] Introducing arxivGPT: chrome extension that summarizes arxived research papers using chatGPT by _sshin_
Try Explainpaper or Elicit for that
throwaway957280 OP t1_j89eiz0 wrote
Reply to comment by OptimizedGarbage in [D] Have their been any attempts to create a programming language specifically for machine learning? by throwaway957280
Thank you for the detailed answer! This is really interesting.
squareOfTwo t1_j89e6u8 wrote
Reply to comment by WokeAssBaller in [D] Have their been any attempts to create a programming language specifically for machine learning? by throwaway957280
So I get softmax as the activation function when I write 'sigmoid', together with funky errors? No thanks, I'd rather manually type "torch.sigmoid()"
Rieux_n_Tarrou t1_j89d04y wrote
Reply to comment by beautifoolstupid in [P] Introducing arxivGPT: chrome extension that summarizes arxived research papers using chatGPT by _sshin_
GPT3.5 is not a model that's available in the API. GPT3 davinci is the most powerful model available.
Case in point: there's a sign-up for the wait-list to get the chatGPT API
Rieux_n_Tarrou t1_j89cidt wrote
Reply to comment by EuphoricPenguin22 in [P] Introducing arxivGPT: chrome extension that summarizes arxived research papers using chatGPT by _sshin_
I think I've seen what you're talking about. But are you sure it's ACTUALLY hitting chatGPT? (should be pretty easy to verify...if it's using something like a headless browser or something)
[deleted] OP t1_j89a15j wrote
Reply to comment by Nameless1995 in [D] The (minor/major?) flaw in the philosophy of OpenAI's ChatGPT purpose by [deleted]
I agree with the nature of your speculations; they match my own thoughts, including that there is just a non-controversial tradeoff here.
>Though, it's a wonder where the arms race between users finding exploits and OpenAI finding counter-exploits will lead to (perhaps, a highly neutered version).
This is my personal fear too, though not if there is always a non-neutered API where devs take on more individual responsibility.
Nameless1995 t1_j897u4z wrote
> What is unsafe about imagining this scenario? Why should we not have this tool or imaginative/subjective interpreter?
Probably a precision-recall tradeoff issue.
> why can't the public interactive implementation, and why does it lie about its abilities as its reason for not answering?
OpenAI is probably using some kind of filter mechanism (which may be induced through some special tuning, or some kind of "hack" layer put on top of GPT -- maybe it checks perplexity, or something combined with keyword detection/regex and/or ML-classification-based filters). Whatever the filter mechanism is, it isn't perfect. They are also shifting the mechanism to prevent exploits (that users are coming up with). This may lead to "overfiltering" (harming recall), resulting in non-answers even to innocuous questions.
More work is probably put into ChatGPT because it's the current most public-facing technology, and OpenAI is probably trying to err on the side of caution (avoiding controversies even if that means a less interesting model that often refuses even relatively innocuous questions). Most users are probably not going to go deep into other APIs to bypass it.
Though, it's a wonder where the arms race between users finding exploits and OpenAI finding counter-exploits will lead to (perhaps, a highly neutered version).
I am just speculating; no idea what they are doing.
DonRybron t1_j896bxj wrote
Reply to comment by noobgolang in [D] Can Google sue OpenAI for using the Transformer in their products? by t0t0t4t4
Newton actually had patents on his laws
KarmaQueenOfficial t1_j8953wq wrote
Reply to [D] Simple Questions Thread by AutoModerator
Any good sources where to learn Machine Learning for free?
he_who_floats_amogus t1_j893bfl wrote
Basically the answer is that it’s OpenAI’s tool and it’s their prerogative to implement it as they see fit. You don’t have any bargaining power to demand additional features or removal of constraints. Even if we take your perspective on safety as axiomatically correct, if the tool meets OpenAI’s goals despite excessive safety impositions, then the tool is successfully working as designed. Abundance of caution is only a problem if it’s hampering OpenAI in fulfilling their own goals.
There are many possibilities as to the “why” here. It’s possible that the system is logistically difficult to control to tight degrees of granularity in various ways and it’s better logistically for OpenAI to structure constraints with broad brush strokes in an attempt to make sure they capture the constraints they desire to have. That’s one high level possible explanation among many.
suflaj t1_j890obw wrote
Reply to comment by mil24havoc in [D] Have their been any attempts to create a programming language specifically for machine learning? by throwaway957280
It's not that they are missing something, it's that they're too general purpose to be considered "specifically for Machine Learning", i.e. DSLs.
They're about as specifically for ML as Python, only Python is better at it because there's a bigger community and better support, meaning wider coverage of ML.
mil24havoc t1_j890jwb wrote
Reply to comment by suflaj in [D] Have their been any attempts to create a programming language specifically for machine learning? by throwaway957280
I know what they are. What are they missing natively that would make them "ML languages"? A "maths language" as you call it sounds an awful lot like an ML language to me...
cantfindaname2take t1_j88yd4n wrote
Reply to comment by cajmorgans in [D] Can Google sue OpenAI for using the Transformer in their products? by t0t0t4t4
I dunno, if a company paid for R&D then they should be entitled to exclusively make money from it for some time. I think the problem is around the what and the when, not the why.
VectorSpaceModel t1_j88xku1 wrote
I’d like to help you, but I can’t answer a question I don’t have concrete info for.
jerha202 t1_j88xka2 wrote
Reply to [D] Have their been any attempts to create a programming language specifically for machine learning? by throwaway957280
I absolutely agree with the OP. Out of the same frustration I actually ended up designing my own language and wrote a compiler for it, and now I use it for all my ML modelling. It probably only solves my particular problems and I don't expect it to be very useful for anyone else, but here goes, in case anyone is curious: https://github.com/waveworks-ai/fl
Appropriate_Fish_451 t1_j88wdns wrote
Reply to [R] DIGIFACE-1M — synthetic dataset with one million images for face recognition by t0ns0fph0t0ns
That's one of the most disturbing faces I've ever seen.
[deleted] t1_j89nmju wrote
Reply to comment by KarmaQueenOfficial in [D] Simple Questions Thread by AutoModerator
[removed]