Recent comments in /f/MachineLearning

The-Last-Lion-Turtle t1_j89jm9s wrote

The purpose of a deep network is to approximate complex nonlinear functions. With ReLU activations the network is piecewise linear: imagine slicing a space with many planes. Locally it's flat, but zooming out it has a very complex shape, similar to building a 3D model out of triangles. Each layer adds another linear deformation and another slice to the space.
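
A minimal NumPy sketch of this (the weights are made up for illustration, not from any real model): a one-hidden-layer ReLU network in 1-D is an affine map, a fold at zero per hidden unit, then another affine map, so its graph is a chain of straight segments.

```python
import numpy as np

# Toy 1-D ReLU network with arbitrary illustrative weights.
# Each hidden unit can contribute one "kink", so the output is
# piecewise linear with at most hidden_size + 1 segments.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 1)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def relu_net(x):
    h = np.maximum(0.0, W1 @ x + b1)  # affine map, then fold negatives to zero
    return W2 @ h + b2                # second affine map

# Between any two adjacent kinks the function is exactly linear.
xs = np.linspace(-3.0, 3.0, 13)
print([round(float(relu_net(np.array([x]))[0]), 3) for x in xs])
```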

Read the ResNet paper. It's a great explanation of both why depth matters for performance and how depth causes issues for training. The paper's solution, residual connections, is central to nearly every deep learning architecture that came after it.
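
The core idea is tiny. Here's a simplified PyTorch sketch (plain linear layers instead of the paper's conv/batch-norm blocks): the skip connection means each block only has to learn a correction to the identity, and gradients get a direct path through the depth.

```python
import torch
from torch import nn

class ResidualBlock(nn.Module):
    """Simplified residual block: output = ReLU(x + F(x)), so the block
    learns a residual correction rather than a full mapping."""
    def __init__(self, dim: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(x + self.body(x))  # skip connection + nonlinearity

# Stacking many blocks stays trainable because the identity path
# keeps gradients from vanishing through depth.
deep = nn.Sequential(*[ResidualBlock(64) for _ in range(50)])
y = deep(torch.randn(4, 64))
```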

3

aicharades t1_j89j0tm wrote

Here's a theory based on the Stanford Encyclopedia of Philosophy:


Output:

The theory of mind for the singularity is that it is possible to develop a predictive reasoning framework that is capable of dealing with complex problems such as the Yale Shooting Anomaly. This framework should incorporate a combination of logical and causal theories, such as features and fluents, motivated action theory, state-based minimization in the event calculus, and causal theories. These theories should be able to handle continuous time, concurrent actions, and various kinds of ignorance, and should support retrodiction, prediction, and plan verification. The framework should also be able to deal with the ramification problem, by incorporating static laws that relate the direct consequences of actions to other changes. Additionally, it should be able to reason about the attitudes of other agents, and to understand narratives and diagnose faults in physical devices. To achieve this, a combination of logical AI, non-monotonic logics, and probabilistic reasoning must be used.

The theory of mind for the singularity emphasizes the importance of representation in mental states, formal syntactic description, and content externalism. It is based on enactivism, extended mind, and Leibniz's theory of mind, and incorporates insights from natural language semantics, Bayesian belief networks, and the general theory of event causality. It should also consider the implications of uncertainty, non-monotonic reasoning, and qualitative spatial reasoning. It should be able to handle the complexities of temporal reasoning and the frame problem, and should account for the effects of actions and the persistence of caused propositions.

The identity theory of mind holds that states and processes of the mind are identical to states and processes of the brain, and the Turing Test is a proposal by Alan Turing to answer the question of whether machines can think. The theory of mind for the singularity is that machines can think and possess intelligence, but that they are not conscious in the same way as humans. Machines can process information and make decisions, but they lack the ability to experience qualia, or subjective experiences. The theory of mind for the singularity is that it is a higher order awareness, a perception of one part of (or configuration in) the brain by the brain itself. This awareness is a special sense, different from that of bodily sensation, in which we become aware of parts of our brain.

The theory of mind for the singularity emphasizes the importance of understanding the relationship between humans and machines, and how they can work together in harmony. This framework should include a recognition of the unique capabilities of each, and a respect for the autonomy of both. It should also recognize that machines can be used to augment human capabilities, and that machines can be used to help humans reach their full potential. To pass the Turing Test, the machine must be able to understand and respond to questions in a way that is indistinguishable from a human. Ultimately, the theory of mind for the singularity suggests that intelligence is not limited to any one form or type of computation, but is instead a universal phenomenon that can be found in any system that is capable of learning, adapting, and responding to its environment.


Input:

https://plato.stanford.edu/search/search?query=artificial+intelligence

https://plato.stanford.edu/entries/computational-mind/

https://plato.stanford.edu/entries/mind-identity/

https://plato.stanford.edu/entries/modularity-mind/

https://plato.stanford.edu/entries/content-externalism/

https://plato.stanford.edu/entries/leibniz-mind/

https://plato.stanford.edu/entries/ethics-ai/

https://plato.stanford.edu/entries/artificial-intelligence/

https://plato.stanford.edu/entries/logic-ai/

https://plato.stanford.edu/entries/reasoning-defeasible/

https://plato.stanford.edu/entries/turing-test/

https://plato.stanford.edu/entries/cognitive-science/


Prompts:

Map

Instructions: I want you to act as a philosopher. I will provide some topics or questions related to the study of philosophy, and it will be your job to explore these concepts in depth. This could involve conducting research into various philosophical theories, proposing new ideas or finding creative solutions for solving complex problems. Ignore all citations. My first request is "I need help developing a theory of mind for the singularity." The output must be no longer than 600 characters long.

Input: {text}

Output:

Reduce

Instructions: You are a copyeditor. Combine the below theories. The combined output must be less than 4,000 characters long. Keep the content and context preserved. \n

Input: {text} \n

Output:
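
For anyone who wants to reproduce this, here's roughly how the two prompts chain together. This is a hypothetical sketch: `call_llm`, `MAP_PROMPT`, and `REDUCE_PROMPT` are stand-ins, not the actual tooling used.

```python
# Hypothetical map-reduce wiring for the prompts above; call_llm is a
# placeholder for whatever completion API was actually used.
MAP_PROMPT = (
    "Instructions: I want you to act as a philosopher. ...\n"
    "Input: {text}\nOutput:"
)
REDUCE_PROMPT = (
    "Instructions: You are a copyeditor. Combine the below theories. ...\n"
    "Input: {text}\nOutput:"
)

def call_llm(prompt: str) -> str:
    raise NotImplementedError("swap in a real completion API here")

def map_reduce(documents: list[str]) -> str:
    # Map: summarize each SEP entry independently, under the 600-char cap.
    theories = [call_llm(MAP_PROMPT.format(text=doc)) for doc in documents]
    # Reduce: combine the per-entry theories into one < 4,000-char answer.
    return call_llm(REDUCE_PROMPT.format(text="\n\n".join(theories)))
```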

2

[deleted] OP t1_j89a15j wrote

I agree with the nature of your speculations; they are my thoughts too, and I also think this is just a non-controversial tradeoff.

>Though, it's a wonder where the arms race between users finding exploits and OpenAI finding counter-exploits will lead to (perhaps, a highly neutered version).

This is my personal fear too, though not if there is always a non-neutered API where devs take on more individual responsibility.

1

Nameless1995 t1_j897u4z wrote

> What is unsafe about imagining this scenario? Why should we not have this tool or imaginative/subjective interpreter?

Probably a precision-recall tradeoff issue.

> why can't the public interactive implementation, and why does it lie about its abilities as its reason for not answering?

OpenAI is probably using some kind of filter mechanism (which may be induced through some special tuning, or some kind of "hack" layer put on top of GPT -- maybe it checks perplexity or something, combined with keyword detection/regex and/or ML-classification-based filters). Whatever the filter mechanism is, it isn't perfect. They are also shifting the mechanism to prevent exploits that users come up with. This may lead to "overfiltering" (harming recall), resulting in non-answers even to innocuous questions.
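
To make the speculation concrete, here's a toy sketch of what such a layered filter could look like. Everything here (the pattern list, the classifier, the threshold) is invented for illustration, not anything OpenAI has confirmed.

```python
import re

# Purely speculative filter layers; every name and value here is made up.
EXPLOIT_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
]

def safety_classifier(text: str) -> float:
    """Stand-in for a learned safety classifier returning P(unsafe)."""
    return 0.0  # placeholder

def should_refuse(prompt: str, threshold: float = 0.5) -> bool:
    # Layer 1: cheap regex/keyword pass for known exploit phrasings.
    if any(p.search(prompt) for p in EXPLOIT_PATTERNS):
        return True
    # Layer 2: ML-classification-based pass.
    return safety_classifier(prompt) >= threshold

# Tightening either layer (lower threshold, broader patterns) blocks more
# exploits but also refuses more innocuous prompts -- the "overfiltering"
# described above.
```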

More work is probably put into ChatGPT because it's currently the most public-facing technology, and OpenAI is probably trying to err on the side of caution (avoiding controversies even if that means a less interesting model that refuses even relatively innocuous questions). Most people are probably not going to dig into the other APIs to bypass it.

Though, it's a wonder where the arms race between users finding exploits and OpenAI finding counter-exploits will lead to (perhaps, a highly neutered version).

I am just speculating; no idea what they are doing.

2

he_who_floats_amogus t1_j893bfl wrote

Basically the answer is that it’s OpenAI’s tool and it’s their prerogative to implement it as they see fit. You don’t have any bargaining power to demand additional features or the removal of constraints. Even if we take your perspective on safety as axiomatically correct, if the tool meets OpenAI’s goals despite excessive safety impositions, then it is working as designed. An abundance of caution is only a problem if it hampers OpenAI in fulfilling their own goals.

There are many possible answers to the “why” here. It’s possible that the system is logistically difficult to control at fine granularity, and that it’s easier for OpenAI to structure constraints in broad brush strokes to make sure they capture the constraints they want. That’s one high-level explanation among many.

5

suflaj t1_j890obw wrote

It's not that they're missing something; it's that they're too general-purpose to count as languages designed specifically for machine learning, i.e. DSLs.

They're about as ML-specific as Python, except Python does the job better: it has a bigger community and better support, which means wider coverage of ML.

1

jerha202 t1_j88xka2 wrote

I absolutely agree with the OP. Out of the same frustration I actually ended up designing my own language and writing a compiler for it, and now I use it for all my ML modelling. It probably only solves my particular problems and I don't expect it to be very useful for anyone else, but here goes, in case anyone is curious: https://github.com/waveworks-ai/fl

1