Recent comments

XPisthebest t1_jegfj5u wrote

PlayStation usually promotes a very specific type of indie: basically, indie games that feel like AA games with cinematic storytelling. Things like Stray, Kena, Sifu, and Tchia are what they prefer to promote, while strategy games, RTS, simulators, and builders are barely acknowledged.

The thing with indies is that you really need the Golf Story, Into the Breach, and Wargroove type of games to create variety. This lifts up the AA-like indies too, because they look different from the other indies.

6

civilsavage7 t1_jegfiou wrote

Ugh, you couldn't spare a second to post a pic of the front!

It looks like you and your Game Boy have been through it. I dig the battle damage. You could probably restore it without too much of a hassle, but I think it looks cool as is.

Hope you still fire it up from time to time.

2

MtHoodMan t1_jegfi2b wrote

I would encourage you to read the Operation Downfall wiki, specifically the section on the Japanese defense of the islands, Operation Ketsugō. I'm not disagreeing with you that the nation was beaten militarily, and they knew it; however, they still had considerable military assets at their disposal. Their navy was heavily damaged but still usable for home island defense, and their air force was still going to be a problem. By the time Okinawa was captured, Japan was already losing. That didn't stop Okinawa from being an absolute bloodbath for the Allied forces.

I would also add that Japan being an island is exactly why it wasn't beaten as badly as Germany. There's no denying that. Just by virtue of the differences between the theaters, Japan losing was never going to be as bad as Germany losing without a full-scale naval invasion.

8

Columbus43219 t1_jegfhzu wrote

If you get older and want a DNR, have it tattooed on your forehead! My mom coded after a long fight with progressive supranuclear palsy (the disease Dudley Moore had), and they grabbed the wrong chart and revived her.

She lived another six months in a vegetative state.

1

KD_A OP t1_jegfh7i wrote

Great question! I have no idea lol.

More seriously, it depends on what you mean by "compare". CAPPr with powerful GPT-3+ models is likely going to be more accurate. But you need to pay to hit OpenAI endpoints, so it's not a fair comparison IMO.

If you can't pay to hit OpenAI endpoints, then a fairer comparison would be CAPPr + GPT-2—specifically, the smallest one on HuggingFace, or whatever's closest in inference speed to something like bart-large-mnli. But another issue is that GPT-2 wasn't explicitly trained on the NLI/MNLI task the way bart-large-mnli was. So I'd need to finetune GPT-2 (small) on MNLI to make a fairer comparison.
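
The core idea (as I understand CAPPr) is to score each candidate class by the model's average per-token log-probability of that class's completion given the prompt, then pick the argmax. Here's a toy, self-contained sketch — the whitespace "tokenizer" and `toy_logprob` stand-in are my own placeholders, not CAPPr's actual API; in practice you'd plug in real GPT-2 token log-probs:

```python
import math

def classify_by_completion_logprob(prompt, completions, token_logprob):
    """Pick the completion with the highest average per-token log-probability.

    `token_logprob(context, token)` stands in for a real LM's
    log P(token | context); swap in GPT-2 log-probs for a real run.
    """
    def avg_logprob(completion):
        tokens = completion.split()  # toy whitespace "tokenizer"
        context, total = prompt, 0.0
        for tok in tokens:
            total += token_logprob(context, tok)
            context += " " + tok
        # Average so longer completions aren't penalized just for length.
        return total / len(tokens)

    return max(completions, key=avg_logprob)

# Toy LM: assigns higher probability to tokens already seen in the context.
def toy_logprob(context, token):
    return math.log(0.5) if token in context else math.log(0.1)

print(classify_by_completion_logprob(
    "this movie was fantastic. The sentiment is",
    ["fantastic", "terrible"],
    toy_logprob))  # → fantastic
```

The averaging step is the design choice worth noting: without it, multi-token class names would score lower purely because summed log-probs shrink with length.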

If I had a bunch of compute and time, I'd like to benchmark (or find benchmarks) for the following text classification approaches, varying the amount of training data if feasible, and ideally on tasks which are more realistic than SuperGLUE:

  • similarity embeddings
    • S-BERT
    • GPT-3+ (they claim their ada model is quite good)
  • sampling
  • MNLI-trained models
  • CAPPr
1