Recent comments in /f/technology

Otterism t1_j9yh2qn wrote

This is most likely a case of that. If authorities got into the phone of any participant in that group chat, they could read it.

That's the thing with all these apps: they're very user-friendly for the owner of the phone - just open the app and everything is available (unless you set a separate passcode inside the app, which is typically opt-in).

2

Odysseyan t1_j9ygk6r wrote

Well, you don't have much choice. You can go Linux if you're good with tech (yes, it's more beginner-friendly these days, but the terminal is always around the corner) and aren't too much into PC gaming. Or Mac if you also hate gaming but have more money. Otherwise you're stuck with Windows.

11

Smith6612 t1_j9ygfiv wrote

Actually, you'd be surprised how often people try to use their home computer. Either because it's slightly faster, it's a nicer (more expensive) machine, or because to them, a computer is a computer.

It's an argument I've had many, many, many times with people in the corporate world. Technical controls and strong corporate policy go hand in hand in stopping that.

1

iByteABit t1_j9yg641 wrote

The key difference here is that it's not driven by profit. Sure, they get donations, but the developers are doing it purely out of interest and a belief in a safe and private internet. I doubt the donations are even enough to make a living from.

There's also none of the business layer of a company, which is usually made up of greedy snakes who wouldn't think twice about doubling profits by sacrificing their morals.

7

SpideogTG t1_j9yfypk wrote

This whole thing isn’t difficult, but companies are not getting it. Reduce the building size to about a third. Have more conference rooms relative to desks. Let people float to any open desk, or claim one if they come to the office a lot (some like or need that). Then…. And this is key: TRUST your employees to do what they need to do to get their jobs done.

4

RuairiSpain t1_j9yf7g4 wrote

The model is huge, though, and needs to sit in GPU memory for performant calculations (sparse matrix dot products).

Probably one thing teams are working on is reducing the dimensions of the sparse matrix so it can fit on fewer GPUs. Also looking at reduced-precision floating point multiplication - 8-bit floats are probably enough for AI matrix math. Maybe combining the matrix multiplication AND the activation function (typically ReLU or sigmoid) so two math operations can be done in one pass through the GPU. That involves refactoring their math libraries.

Or they build custom TPUs with all of this baked into the hardware.
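To picture the fusion idea: the fused and unfused versions compute the same thing, but a fused GPU kernel avoids writing the intermediate matrix out to memory and reading it back. A minimal NumPy sketch (NumPy itself doesn't fuse at the kernel level - the function names here are just illustrative, not from any real library):

```python
import numpy as np

def unfused_linear_relu(x, W):
    y = x @ W                   # pass 1: matrix multiply (intermediate stored)
    return np.maximum(y, 0.0)   # pass 2: ReLU activation

def fused_linear_relu(x, W):
    # Conceptually one pass: multiply and clamp negatives together,
    # so the intermediate never has to round-trip through memory.
    return np.maximum(x @ W, 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
W = rng.standard_normal((8, 3))

# Both give identical results; fusion only changes how the work is scheduled.
assert np.allclose(fused_linear_relu(x, W), unfused_linear_relu(x, W))
```

On a real GPU this is what kernel-fusion passes in frameworks do: the win is saved memory bandwidth, not fewer arithmetic operations.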

The future is bright 🌞 for AI. Until we hit the next brick wall

2