Recent comments in /f/MachineLearning

_Arsenie_Boca_ OP t1_j7miglb wrote

Thanks, good pointer. I am particularly interested in the different mechanisms by which the embeddings might be integrated into LMs. E.g. in PaLI and SimVLM, the external embeddings (here image encodings) are simply treated as token embeddings. Others use modified attention mechanisms to potentially make better use of the information. Are you aware of any work that directly compares multiple integration mechanisms?
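For concreteness, here's a minimal numpy sketch of the "embeddings as tokens" style of integration (the PaLI/SimVLM approach as I understand it) — all dimensions and the projection are hypothetical, not taken from either paper's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

d_model = 64     # LM hidden size (hypothetical)
d_image = 128    # image-encoder output size (hypothetical)
seq_len, n_patches = 10, 4

# Image-encoder outputs: one feature vector per image patch
image_feats = rng.standard_normal((n_patches, d_image))

# A learned linear projection maps image features into token-embedding space
W_proj = rng.standard_normal((d_image, d_model)) / np.sqrt(d_image)
image_tokens = image_feats @ W_proj            # (n_patches, d_model)

# Ordinary text token embeddings
text_tokens = rng.standard_normal((seq_len, d_model))

# Integration step: image "tokens" are simply prepended, and the LM's
# standard self-attention runs over the concatenated sequence unchanged
lm_input = np.concatenate([image_tokens, text_tokens], axis=0)
print(lm_input.shape)  # (14, 64)
```

The modified-attention approaches would instead change how the two modalities interact inside the attention layers, rather than just concatenating at the input.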

1

wonderingandthinking OP t1_j7mbx3l wrote

As a way of being exposed to other players in the field, it does matter. Some of the best and most effective examples may be tucked away with someone less well known, or someone relatively known who just isn't getting the press that only the most obvious examples are currently getting.

Edit - typo

1

jturp-sc t1_j7mawk7 wrote

"Let's just slap what's effectively a reskinned version of ChatGPT in a sidebar" is certainly a choice ...

I like how this might be the spark that gets Product Management and UX at-large to finally start understanding how to work with ML-based functionality in their products. However, I think we're going to look back and facepalm at a lot of design decisions we see over the next 6-ish months as companies rush to get something (anything) out the door faster than their competitors.

18

bitemenow999 t1_j7m7ctl wrote

My boss during my internship at FB (now Meta) came from academia and had been a professor at a well-known university. He literally didn't write a single line of code during my 3 months there; all I/we (most of the team) got were scribbled notes, written during our weekly meetings, on what to implement...

2

dataslacker t1_j7m1xjg wrote

Take it from someone who learned C++ first: start with Python. You are actually very unlikely to get an interview in C++; the industry standard is Python. Know your algorithms and data structures well enough to do the intermediate-level questions on HackerRank and you'll be in good shape.

2

dataslacker t1_j7m1dho wrote

I've been working in ML for 8 years and I've never seen or heard of a scientist being hired without at least one coding interview. I've never seen someone just "write down an algorithm" and hand it off to an engineer. I would really like to hear where you saw this, because it's nowhere near my experience at big tech companies.

3

currentscurrents OP t1_j7m04ot wrote

Meh, I think the safety concerns are overblown. It's really more of bad PR for Microsoft than an actual threat.

You can already find out how to make drugs, build a bomb, etc from the internet. The Anarchist Cookbook has been well-known for decades and you can find a pdf with a simple google search.

39

buzzbuzzimafuzz t1_j7lz92s wrote

A quote from the Verge liveblog:

>This is an important part of the presentation, but I just want to note that Microsoft is having to carefully explain how its new search engine will be prevented from helping to plan school shootings.
>
>"Early red teaming showed that the model could help plan attacks" on things like schools. "We don't want to aid in illegal activity." So the model is used to act as a bad actor to test the model itself.

The proposed safety system sounds interesting, but given how well simple prompt-engineering attacks still work on ChatGPT, I'm not feeling optimistic about how this will hold up in the real world.

23

Dr_Love2-14 t1_j7lxp7k wrote

Leader in the space?... It is starting to irk me to see so many articles and discussions about this "AI war" between OpenAI and Google and their respective chatbots. OpenAI's main chatbot is built on GPT-3; Google has LaMDA, among many others. One thing is for sure: they are both large and perform differently depending on the metric used.

Companies such as Facebook, Google, NVIDIA, and Chinese ones like Baidu, etc. all invest heavily in AI research. The contributions of these research scientists, nationwide and worldwide, are all noteworthy and build on each other. Google employs far more research scientists than OpenAI, so the volume of its ML publications, and their collective impact factor, is greater. DeepMind, an AI research subsidiary of Google, has been a leader in AI research and deep learning for many years.

But to directly answer your question, and for what it's worth, I would say NASA is the leader in space. Honestly, your question is vague and poorly defined, and you shouldn't equate chatbots with the companies behind them.

32