Recent comments in /f/MachineLearning

Ch1nada OP t1_j6f451e wrote

Ah, good thing it's not monetized yet :P All jokes aside, I think their policy is against repetitive, fully programmatically generated content. Since I'm still manually curating the contents of the video due to the current limitations, that might actually be a good thing. But thanks for pointing that out, I'll try to clarify it and adjust as needed.

2

cavoli31 t1_j6f3gt5 wrote

It really depends on your advisor. I did my PhD with the same advisor I had for my master's. We worked in person during the MSc. During the PhD he had family matters that took him across the Atlantic, but I never had any communication problems. He was also active on the topic, so using Slack and Zoom we did fine.

My lab also has new PhD students who work remotely with my advisor. So I think remote work works if:

- You are very self-motivated and your advisor leaves you alone, or
- Your advisor micromanages you and you don't mind.

It might not work the other way around.

Disclaimer: I have always seen myself as a doer, so I was happy to be guided heavily at the beginning of my PhD. As I matured, the weekly meetings became fewer, shorter, and more focused.

Edit: my lab studies cancer genomics. It's mostly a wet lab, but we have an ML branch through my project.

7

Low_Basil9900 t1_j6erauj wrote

I don't. It's a useful tool. I'm interested in learning how it works so I can understand what I'm being presented with, specifically when it comes to segmentation and feature identification in images.

I just feel physically repulsed by the output of AI art.

The textures, the colours, the composites between different images to produce the final result. They make me really uncomfortable. It's a physical sensation.

0

royalemate357 t1_j6eq454 wrote

I tried it with ChatGPT, and it correctly identified the text as AI-generated when I used the output verbatim. But when I changed the capitalization of the first letter of the sentence and removed a few commas, it switched to human-generated (84%). It seems to me it's a rather superficial detector and is quite easy to fool. Also, what is the false positive rate? If this tool or others are used to flag students for plagiarism, it had better be pretty close to zero.
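The perturbation described above is easy to automate if you want to probe a detector's robustness systematically. Here's a minimal sketch: the `perturb` helper is hypothetical (not part of any detector's API), and the detector call itself is left out since it depends on whichever tool you're testing — you would run both the original and perturbed text through it and compare the scores.

```python
def perturb(text: str) -> str:
    """Apply trivial surface edits that leave the meaning intact:
    strip commas and lowercase the first alphabetic character."""
    no_commas = text.replace(",", "")
    for i, ch in enumerate(no_commas):
        if ch.isalpha():
            return no_commas[:i] + ch.lower() + no_commas[i + 1:]
    return no_commas

sample = "However, the model generates fluent, grammatical text."
print(perturb(sample))
# however the model generates fluent grammatical text.
```

If a detector's verdict flips under edits this shallow, it is likely keying on surface statistics rather than anything deep about the text.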

4

YoutubeStruggle OP t1_j6eiymv wrote

Reply to comment by MrEloi in [P] AI Content Detector by YoutubeStruggle

TBH, that sounds scary. AI will make life much faster and increase the productivity of every individual. But I believe that in sensitive domains where the quality of information is paramount, human-written content will remain.

1

YoutubeStruggle OP t1_j6eh9z2 wrote

AI can generate text that resembles human writing, but it is still not capable of truly replicating the depth and nuance of human writing. AI text generation models can generate text that is coherent and grammatically correct, but it lacks the personal touch, creativity, and emotional depth that is unique to human writing. This is because AI is trained on large amounts of data and generates text based on statistical patterns in the data, whereas human writing is influenced by personal experiences, emotions, and individual perspectives. Additionally, AI text generation models may still struggle with context-awareness and understanding the full meaning behind the words it is generating. So, AI-generated content can often be distinguished from human-written content by its lack of originality and personal touch.

That's what ChatGPT thinks about writing text that resembles human-generated content :)

−3