Recent comments in /f/MachineLearning

KPTN25 t1_j94a1y0 wrote

Reply to comment by Metacognitor in [D] Please stop by [deleted]

Nah. Negatives are a lot easier to prove than positives in this case. LLMs aren't able to produce sentience for the same reason a peanut butter sandwich can't produce sentience.

Just because I don't positively know how to achieve eternal youth doesn't invalidate the fact that I'm quite confident it isn't McDonald's.

3

Old_Scallion2173 OP t1_j945zye wrote

Thank you for taking the time to answer my question. After reading your answer, I've come to the conclusion that image segmentation could improve my model, but that I'm not using it for its intended purpose, and also that I have a lot of reading to do :). I do wish to ask, though: do you think I should instead focus on fine-tuning my model and gathering a bigger dataset? Maybe I'm getting too optimistic about instance segmentation.

1

TheRealSerdra t1_j944f39 wrote

Reply to comment by Sphere343 in [D] Please stop by [deleted]

What defines sentience? If I ask ChatGPT “what are you” it’ll say it’s ChatGPT, an LLM trained by OpenAI, or something to that effect. Does that count as sentience or self-awareness?

1

Metacognitor t1_j941yl1 wrote

Reply to comment by KPTN25 in [D] Please stop by [deleted]

My question was more rhetorical, as in: what would be capable of producing sentience? Because I don't believe anyone actually knows, which makes any definitive statements of that nature (like yours above) come across as presumptuous. Just my opinion.

1

Wrandraall t1_j9415hk wrote

Reply to comment by gunshoes in [D] Please stop by [deleted]

And every subreddit has its own plague of posts. This is the main flaw of Reddit: a pyramidal system where lots of new subscribers and beginners ask the same questions over and over, without either thinking for more than 10 seconds themselves or searching the sub's history.

7

blackhole077 t1_j93xnn8 wrote

Since I'm on a mobile device, I'll write a shorter answer that hopefully gives you some insight.

From what I've understood of your question, you want to know whether bounding boxes would perform worse due to the proximity of the cells you wish to detect.

Both methods may struggle with cells in close proximity, though instance segmentation may perform better in that regard. However, let me reframe the question slightly.

First, there's a reason that object detection and instance segmentation are different methods. The latter is preferred in situations where you need to know exactly which pixels belong to the detected class, which I don't think is what you're aiming for.

Second, the annotation process is, of course, more labor-intensive when you want segmentation masks. Luckily, you can generate bounding boxes from masks easily, but keep the extra labeling effort in mind if you're on a tight schedule.
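For what it's worth, going from a mask to a box is only a couple of lines with NumPy. A minimal sketch, assuming `mask` is an H×W boolean array for a single instance (my naming, not anything from your setup):

```python
import numpy as np

def mask_to_bbox(mask: np.ndarray) -> tuple[int, int, int, int]:
    """Return (x_min, y_min, x_max, y_max) for one binary instance mask."""
    ys, xs = np.nonzero(mask)  # row/column indices of the mask's pixels
    if xs.size == 0:
        raise ValueError("empty mask")
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

Run that once per instance mask and you get tight, axis-aligned boxes for free from your segmentation annotations.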

If you have additional questions please let me know. I wish you luck in your endeavor.

Hope this helps

5

MysteryInc152 OP t1_j93saow wrote

In-context learning is overpowered lol. Really is implicit finetuning.

This is the prompt template:

>You are an AI assistant with several tools available to you. The tools are the following:
>
>MATH: This allows you to evaluate mathematical expressions using the math.js library.
>
>NOW: This returns the current date and time. You must not pass any arguments to this tool!
>
>SEARCH: This returns a web search result for the given string argument.
>
>DO NOT USE TOOLS WITHIN TOOLS! KEEP ALL TOOL CALLS SEPARATE FROM EACH OTHER!
>
>User: What is 10 times 14?
>
>Assistant: 10 * 14 is [MATH(10 * 14) -> 140] 140.
>
>User: How old is Donald Trump?
>
>Assistant: Donald Trump is [SEARCH(Donald Trump age) -> 76] 76 years old.
>
>User: What day is it?
>
>Assistant: It is [NOW() -> 13/02/2023, 20:01:35] the 13th of February 2023.
>
>User: Has Avatar the way of water been released yet?
>
>Assistant: [SEARCH(Avatar the way of water release date) -> 22.11.2022] Avatar: The way of water was released on the 22nd of november 2022. Today is [NOW() -> DATE_TODAY] the 13th of February 2023. Therefore, [MATH(2023 > 2022) -> true] it was released last year.
>
>User: Where was the inventor of the light bulb born, and what is the last country his home country was at war with?
>
>Assistant: Thomas Edison, the inventor of the lightbulb, was born in [SEARCH(Thomas Edison birthplace) -> Milan, Ohio] Milan, Ohio. The last country the United States was at war with was [SEARCH(last country US at war with) -> Iraq] Iraq.
>
>User: USER_INPUT
>
>Assistant:
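For anyone wanting to try this, here's a minimal sketch of the driver loop the `[TOOL(args) -> result]` syntax implies: generate until the model opens a tool call, run the tool, splice the result back in, and let the model continue. Everything here (`generate`, the `web_search` stub) is my own assumed scaffolding, not something from the post:

```python
import re
from datetime import datetime

# Matches a tool call the model has just opened, e.g. "[MATH(10 * 14) -> "
TOOL_CALL = re.compile(r"\[(MATH|NOW|SEARCH)\(([^)]*)\)\s*->\s*$")

def web_search(query: str) -> str:
    # Hypothetical stub; swap in a real search API.
    return f"<result for: {query}>"

def run_tool(name: str, arg: str) -> str:
    if name == "MATH":
        # Stand-in for the math.js evaluation the prompt promises (unsafe; demo only).
        return str(eval(arg, {"__builtins__": {}}))
    if name == "NOW":
        return datetime.now().strftime("%d/%m/%Y, %H:%M:%S")
    if name == "SEARCH":
        return web_search(arg)
    raise ValueError(f"unknown tool: {name}")

def complete_with_tools(prompt: str, generate) -> str:
    # `generate` is an assumed callable that returns model text, stopping
    # either right after "-> " inside a tool call or at the end of the reply.
    reply = generate(prompt)
    while (call := TOOL_CALL.search(reply)):
        reply += run_tool(call.group(1), call.group(2)) + "]"  # splice in the result
        reply += generate(prompt + reply)  # resume generation after the call
    return reply
```

The few-shot examples teach the model the call syntax; the loop above is just the plumbing that honors it.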

30

TeamRocketsSecretary t1_j93os17 wrote

Reply to comment by kromem in [D] Please stop by [deleted]

Look, if you think the dismissals are increasingly obsolete, it’s because you don’t understand the underlying tech… autocomplete isn’t autoregression isn’t sentience. Your fake example isn’t even a good one.

To suggest that it’s performing human-like processing of emotions because the internal states of a regression model resemble some notion of intermediate mathematical logic is ridiculous, especially in light of research showing these autoregressive models struggle with symbolic logic. If you favor that type of discussion, I’m sure there’s a philosophy/ethics/metaphysics-focused sub where you can have it. Physics subs suffer from the same problem, especially anything quantum or black-hole related, where non-practitioners pose absolutely insane thought experiments. That you even think these dismissals of ChatGPT are “parroted” shows your bias, and like I said, there’s a relevant sub where you can mentally masturbate over that, but this sub isn’t it.

10

buyIdris666 t1_j93m0ol wrote

Video will remain unsolved for a while.

LLMs came first because the bit rate is lowest: a sentence of text is only a few hundred bits of information.

Now image generation is getting good, though still not perfect. The models are larger because there's maybe 100x the information in a high-res image compared to a paragraph of text.

Video is even harder: 30 high-res images a second. Making long, coherent, believable videos takes an enormous amount of data and processing power.
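A rough back-of-envelope with my own assumed numbers (nothing rigorous, just to make the gap concrete):

```python
# All constants here are rough assumptions, purely for illustration.
sentence_bits = 20 * 16             # ~20 tokens at ~16 bits each ≈ 320 bits
image_bits = 512 * 512 * 8 * 0.1    # 512x512 px at ~0.1 byte/px compressed
video_bits_per_s = image_bits * 30  # 30 frames/s, ignoring inter-frame redundancy

print(f"sentence: {sentence_bits:,} bits")
print(f"image:    {image_bits:,.0f} bits ({image_bits / sentence_bits:,.0f}x the sentence)")
print(f"video:    {video_bits_per_s:,.0f} bits per second")
```

Even granting aggressive compression and heavy redundancy between frames, video sits orders of magnitude above text.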

5

Art_Soul t1_j93jp5o wrote

I think the OP is a bit optimistic in stating that no one with a working brain will design a self-aware AI. I used to share that optimism; however, over the last couple of years I have concluded that it is misplaced and probably naive.

The unfortunate reality is that there are countless people who will use technology in adverse ways for financial gain.

AI will be developed that is capable of every type of horrible behaviour. It will be designed to lie, to cheat, and to steal in more and more sophisticated ways. It will be designed to cause maximum harm.

If sentience is reasonably attainable, it will be developed by people who have dreamt up a way to use it to steal from or scam others.

I believe it is inevitable that we will be facing AI that is developed in all the ways we don't want it to be developed, and applied in all the ways we don't want it to be applied.

Naturally, cyber security will adapt and evolve to counter these adverse developments. Good AI will protect us from bad AI. How this will look is anyone's guess.

The assertion that no one would do something bad, because it would be a bad thing for them to do, isn't made from a reliably broad perspective.

0