Recent comments in /f/MachineLearning
F1ckReddit t1_j55jxbk wrote
Reply to [D] Did YouTube just add upscaling? by Avelina9X
I want to know why this was removed. I saved a screenshot of the post.
wintermute93 t1_j55igrj wrote
Reply to comment by Avelina9X in [D] Did YouTube just add upscaling? by Avelina9X
I'm definitely not seeing whatever you're seeing; if I set that video to 144p and full-screen it (1440p monitor), I get an unwatchable mess, not an unwatchable mess that's been upscaled and sharpened like your screenshots.
Avelina9X OP t1_j55hcfz wrote
Reply to comment by NotARedditUser3 in [D] Did YouTube just add upscaling? by Avelina9X
It's a GTX 1660 Ti in a tablet laptop. No other video platform does this.
Avelina9X OP t1_j55h54h wrote
Reply to comment by IntelArtiGen in [D] Did YouTube just add upscaling? by Avelina9X
I think it's client-side, which is why I mentioned it's perhaps using a GLSL-based CNN. That's absolutely possible in WebGL2, and I've been experimenting with that sort of tech myself (not for upscaling, but just as a proof-of-concept CNN in WebGL).
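To give a feel for what I mean, here's a rough sketch of the per-pixel work one convolution layer amounts to (written in Python rather than GLSL just to show the arithmetic; in WebGL2 the image lives in a texture and this loop body runs in parallel per fragment — this is obviously not YouTube's actual model):

```python
# Sketch: the per-pixel work of a single 3x3 convolution layer, the
# building block a GLSL fragment shader would implement.
def conv3x3(image, kernel):
    """image: 2D list of floats, kernel: 3x3 list of floats."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for ky in range(-1, 2):
                for kx in range(-1, 2):
                    # Clamp-to-edge sampling, like a clamped texture lookup.
                    sy = min(max(y + ky, 0), h - 1)
                    sx = min(max(x + kx, 0), w - 1)
                    acc += image[sy][sx] * kernel[ky + 1][kx + 1]
            out[y][x] = acc
    return out

# A sharpening kernel of the kind an upscaler's last layer can resemble.
sharpen = [[0, -1, 0], [-1, 5, -1], [0, -1, 0]]
flat = [[0.5] * 4 for _ in range(4)]
print(conv3x3(flat, sharpen)[1][1])  # flat regions pass through unchanged: 0.5
```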
Avelina9X OP t1_j55gyws wrote
Reply to comment by f10101 in [D] Did YouTube just add upscaling? by Avelina9X
Here's the video link: https://www.youtube.com/watch?v=yPUGPLAfhTk
But if YouTube are doing A/B testing, your hardware/account/IP/region might not be marked for rollout yet.
Apprehensive-Tax-214 OP t1_j55d6uo wrote
Reply to comment by kdr4t3 in [P] Built an at-cost, pay per second, open-source API for Tortoise text-to-speech (best I've heard!) by Apprehensive-Tax-214
Thanks! Looking into it. Sorry about this.
Apprehensive-Tax-214 OP t1_j55d4pz wrote
Reply to comment by Unlikely-Advice-7168 in [P] Built an at-cost, pay per second, open-source API for Tortoise text-to-speech (best I've heard!) by Apprehensive-Tax-214
Thanks! Looking into it. Sorry about this.
NotARedditUser3 t1_j55avp6 wrote
Reply to [D] Did YouTube just add upscaling? by Avelina9X
It's possible, if you have one of the very newest graphics cards, that it is your very own hardware doing this and not the website. Dunno
IntelArtiGen t1_j55at5j wrote
Reply to [D] Did YouTube just add upscaling? by Avelina9X
I don't really see how and why they would do it. What's the video? You can check the codec with right click > "Stats for nerds"; the codec should say which algorithm was used to encode/decode the video. Using CNNs client-side for this task would probably be quite CPU/GPU intensive and I doubt they would do it (except perhaps as an experiment). And using CNNs server-side wouldn't make sense if it increases the size of the data download.
It does look like CNN artifacts.
f10101 t1_j55angk wrote
Reply to comment by Avelina9X in [D] Did YouTube just add upscaling? by Avelina9X
Not necessarily. This kind of thing will also happen if you chain upscaling, quantization, smoothing and sharpening techniques.
What's the video link?
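To illustrate what I mean by chaining, here's a toy pipeline on a 1-D strip of grayscale pixels (made-up values, just to show how the stages compose and smear a hard edge):

```python
# Toy sketch of a chained pipeline: nearest-neighbour upscale ->
# quantize -> box smooth -> unsharp-mask sharpen.
def upscale2x(px):
    return [v for v in px for _ in (0, 1)]  # repeat each pixel

def quantize(px, step=32):
    return [(v // step) * step for v in px]

def smooth(px):
    # 3-tap box filter with edge clamping.
    n = len(px)
    return [(px[max(i - 1, 0)] + px[i] + px[min(i + 1, n - 1)]) / 3
            for i in range(n)]

def sharpen(px, amount=1.0):
    # Unsharp mask: original plus (original - smoothed).
    sm = smooth(px)
    return [v + amount * (v - s) for v, s in zip(px, sm)]

strip = [10, 40, 200, 220]  # a hard edge
out = sharpen(smooth(quantize(upscale2x(strip))))
print([round(v) for v in out])
```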
tomvorlostriddle t1_j559v2b wrote
Reply to [D] Did YouTube just add upscaling? by Avelina9X
Sure that this is not your hardware clientside?
Avelina9X OP t1_j558wqk wrote
Reply to comment by LiquidDinosaurs69 in [D] Did YouTube just add upscaling? by Avelina9X
I'm not going crazy, right? Those are absolutely CNN upscaling artefacts.
LiquidDinosaurs69 t1_j558t5l wrote
Reply to [D] Did YouTube just add upscaling? by Avelina9X
Woah
arararagi_vamp t1_j557ewd wrote
Reply to [D] Simple Questions Thread by AutoModerator
I have built a simple CNN which is able to detect circles on a white background with noise using PyTorch.
Now I wish to extend my network to return the center of each circle as coordinates. The problem is that each sample contains a variable number of circles, meaning I would need a variable number of labels per sample. In a CNN, however, the number of outputs remains constant.
How do I work around this problem?
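To make the mismatch concrete, here's a toy illustration (made-up coordinates, not my actual data):

```python
# Each circle center is an (x, y) pair, so an image with k circles has
# a label of length k -- the label size varies per image.
labels_img_a = [(12, 30)]                    # image with one circle
labels_img_b = [(4, 9), (50, 17), (33, 61)]  # image with three circles

# A fixed regression head outputs a constant number of values, e.g.
# 2 floats for exactly one center, so only img_a's label fits:
head_output_size = 2
print(len(labels_img_a) * 2 == head_output_size)  # True
print(len(labels_img_b) * 2 == head_output_size)  # False
```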
Omnes_mundum_facimus t1_j555aa2 wrote
Reply to comment by tennismlandguitar in [D] ML Researchers/Engineers in Industry: Why don't companies use open source models more often? by tennismlandguitar
The short but mostly true conversation we had with legal.
- engineer: so this model was actually developed by our biggest competitor
- lawyer: wtf?????
- engineer: And we used a pretrained checkpoint, again from even bigger competitor
- lawyer: wtf??
- engineer: All cool, It was trained on this imagenet thing
- lawyer: And who owns this imagenet thing?
- engineer: ????
- lawyer: And did everybody in this imagenet thing consent to his or her picture being used?
- engineer: ????
- lawyer: what the actual f ?????
- engineer: So I guess we are using our own model trained from our own data then.
notdelet t1_j552geq wrote
Reply to comment by SearchAtlantis in [R] Researchers out there: which are current research directions for tree-based models? by BenXavier
Haha yes, Cynthia Rudin.
jfacowns t1_j550f70 wrote
Reply to [D] Simple Questions Thread by AutoModerator
XGBoost Question around One-Hot Encoding & Get_Dummies in Python
I am working on building a model for NHL (hockey) games and have a spreadsheet with a ton of advanced stats from teams, dates they played and so on.
All of my data in this spreadsheet is categorized as a float. I am trying to add in a few columns of categorical data as I feel it could help the model.
The categorical columns have data that determines if the home team or the away team is playing on back to back days.
I am trying to determine here if one-hot encoding is best for this approach or if I'm misunderstanding how it works as a whole.
Here is some code
import pandas as pd

NHLData = pd.read_excel('C:\\Temp\\NHL_ModelBuilder.xlsx')
NHLData.drop(['HomeTeam', 'AwayTeam', 'Result'], axis=1, inplace=True)
NHLData = pd.get_dummies(NHLData, columns=['B2B_Home', 'B2B_Away'])
Does this make sense? Am i on the right track here?
If I do NHLData.head() I can see the one-hot encoded columns, but when I do NHLData.dtypes I see this:
B2B_Home_0 uint8
B2B_Home_1 uint8
B2B_Away_0 uint8
B2B_Away_1 uint8
Should these not be objects?
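For reference, here's a minimal reproduction with made-up data (column names borrowed from my spreadsheet) showing what get_dummies returns:

```python
import pandas as pd

# Tiny made-up frame standing in for the spreadsheet: 1 = the team
# plays on back-to-back days, 0 = it doesn't.
df = pd.DataFrame({"B2B_Home": [0, 1, 0], "B2B_Away": [1, 0, 0]})
df = pd.get_dummies(df, columns=["B2B_Home", "B2B_Away"])

print(sorted(df.columns))
# Indicator columns come back as a small numeric/boolean dtype
# (uint8 on older pandas, bool on pandas >= 2.0), not object.
print(df.dtypes)
```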
_Arsenie_Boca_ t1_j54rrg9 wrote
Reply to comment by Spico197 in [P] paper-hero: Yet Another Paper Search Tool by Spico197
Great, looking forward to trying the space :)
Spico197 OP t1_j54pdgj wrote
Reply to comment by _Arsenie_Boca_ in [P] paper-hero: Yet Another Paper Search Tool by Spico197
Thanks very much for your reply.
I didn't evaluate the query time. This tool doesn't download the whole arXiv dataset; it just calls the official API, so the time depends on the network connection. But it shouldn't take long to execute a query.
Yes, absolutely! There are some other things to do before making an online demo, e.g. merging the current two-stage searching into one step. I'm working on it. Thanks again for the advice!
unsteadytrauma t1_j54nqh3 wrote
Reply to [D] Simple Questions Thread by AutoModerator
Is it possible to run a model like GPT-2 or GPT-J on my own computer and use it to rewrite/rephrase and summarize text? Or would that require too many resources for a personal computer? I'm a noob.
Ouitos t1_j54nh7v wrote
Reply to comment by JClub in [R] A simple explanation of Reinforcement Learning from Human Feedback (RLHF) by JClub
Yes, but if you have a ratio of 0.6, you then take the min of 0.6 * R and 0.8 * R, which is ratio * R. In the end, the clip is only effective one way, and the 0.8 lower limit is never used. Or maybe R has a particular property that makes this not as straightforward?
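Plugging toy numbers into the standard clipped surrogate term (values made up, with R standing for the advantage and the 0.8/1.2 bounds from the thread):

```python
# Standard PPO clipped surrogate term: min(ratio * R, clip(ratio) * R).
def clipped_term(ratio, R, low=0.8, high=1.2):
    clipped = min(max(ratio, low), high)
    return min(ratio * R, clipped * R)

# Positive R: the lower bound never bites, as I said above.
print(clipped_term(0.6, 1.0))   # min(0.6, 0.8) -> 0.6, i.e. ratio * R
# Negative R: min() now selects the clipped side, so 0.8 does matter.
print(clipped_term(0.6, -1.0))  # min(-0.6, -0.8) -> -0.8
```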
CuriousCesarr OP t1_j54lfq2 wrote
Reply to comment by NamerNotLiteral in [P] Looking for someone with good NN/ deep learning experience for a paid project by CuriousCesarr
Sorry for the late reply but I had a very busy period. In the end, I found a small Greek ML company that was excited about the project and we entered deeper discussions. I also updated my post to reflect this. Have a great day! :)
Avelina9X OP t1_j55kcjo wrote
Reply to comment by F1ckReddit in [D] Did YouTube just add upscaling? by Avelina9X
Yeah, that's really weird. We're documenting Google Chrome silently adding upscaling. I think it's a really worthwhile discussion for the community: figuring out what model it's using, as well as how they're implementing it in a cross-platform, GPU-agnostic way that is buttery smooth and doesn't use a ton of resources.