Recent comments in /f/MachineLearning
spiritualquestions OP t1_j57f0az wrote
Reply to comment by suflaj in [D] Not sure if time series or multiple classifications? by spiritualquestions
Thanks for getting back to me!
Would this be considered multi-output regression? Also, why would I not want to use multi-output classification? For clarification, the scores are discrete, so there is no score of 1.2; they are either 1, 2, 3, or 4, and could even be treated as "severe", "bad", "medium", "good".
suflaj t1_j57ce83 wrote
This looks like something for XGBoost. In that case you're looking at the XGBRegressor class.
Your X is the first four features, your Y is the three outputs. You will need to convert the medication to a one-hot vector representation, and the diet can presumably be enumerated into whole numbers sorted by healthiness.
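Something like this as a sketch (all the data is made up, and I've used sklearn's GradientBoostingRegressor as a stand-in so the snippet runs without an XGBoost install — XGBRegressor drops into MultiOutputRegressor the same way):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
n = 100

# Toy stand-ins for the 4 features: two numeric ones,
# a categorical medication id, and a diet enumerated by healthiness.
numeric = rng.random((n, 2))
medication = rng.integers(0, 3, n)      # 3 hypothetical medications
diet = rng.integers(0, 5, n)            # whole numbers sorted by healthiness

med_onehot = np.eye(3)[medication]      # one-hot encode the medication
X = np.hstack([numeric, med_onehot, diet[:, None]])
Y = rng.integers(1, 5, (n, 3))          # three discrete scores in 1..4

model = MultiOutputRegressor(GradientBoostingRegressor(n_estimators=50))
model.fit(X, Y)
print(model.predict(X).shape)  # (100, 3): one prediction per output
```

Since the scores are ordinal (1..4), regressing and then rounding keeps the ordering information that plain multi-class classification would throw away.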
Avelina9X OP t1_j57babi wrote
Reply to comment by Syzygianinfern0 in [D] Did YouTube just add upscaling? by Avelina9X
I have neither a 30- nor a 40-series card... plus this is running on integrated graphics.
Avelina9X OP t1_j57b8cf wrote
Reply to comment by currentscurrents in [D] Did YouTube just add upscaling? by Avelina9X
It's not even running on my 1660 Ti. It's running on my integrated Intel graphics; the dedicated GPU is completely idle during this. Aaaand there's nothing related in chrome://flags at all.
currentscurrents t1_j573tug wrote
Reply to [D] Did YouTube just add upscaling? by Avelina9X
They announced upscaling support in Chrome at CES 2023.
>The new feature will work within the Chrome and Edge browsers, and also requires an Nvidia RTX 30-series or 40-series GPU to function. Nvidia didn't specify what exactly is required from those two GPU generations to get the new upscaling feature working, nor if there's any sort of performance impact, but at least this isn't a 40-series only feature.
Interesting though that it's working with your GTX 1660 Ti. Maybe Chrome is implementing a simpler upscaler as a fallback for older GPUs?
Check your chrome://flags for anything that looks related.
VonPosen t1_j573frm wrote
Reply to [D] GCN datasets by ramya_1995
You can train on multiple graphs.
icedrift t1_j571qce wrote
Reply to comment by unsteadytrauma in [D] Simple Questions Thread by AutoModerator
I'm pretty sure GPT-J 6B requires a minimum of 24 GB of VRAM, so you would need something like a 3090 to run it locally. That said, I think you're better off hosting it on something like Colab or Paperspace.
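Rough weights-only arithmetic behind that number (this ignores activations and the KV cache, which add more on top):

```python
params = 6e9        # GPT-J parameter count
gib = 1024 ** 3

# 4 bytes/param in fp32, 2 bytes/param in fp16
print(f"fp32 weights: {params * 4 / gib:.1f} GiB")  # ~22.4 GiB -> needs a 24 GB card
print(f"fp16 weights: {params * 2 / gib:.1f} GiB")  # ~11.2 GiB in half precision
```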
Syzygianinfern0 t1_j56zril wrote
ProbablyDoesntLikeU t1_j56zcg3 wrote
Reply to [D] Did YouTube just add upscaling? by Avelina9X
Lol, I used to love panga
dojoteef t1_j56v9rx wrote
Reply to comment by Avelina9X in [D] Did YouTube just add upscaling? by Avelina9X
Reddit automatically removed it, likely due to editing the post. Don't know why, but that occasionally triggers their spam filter. I've approved the post again.
AmalgamDragon t1_j56uj5c wrote
Reply to comment by tennismlandguitar in [D] ML Researchers/Engineers in Industry: Why don't companies use open source models more often? by tennismlandguitar
I recently started using RL in my personal work on automated futures trading. After reviewing the libraries available in the RL space, I did try the one you linked to. Some of the samples were broken; while I did tweak the code to get them working, I found it more straightforward to get up and running with PPO from stable-baselines3.
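For anyone curious, what stable-baselines3 expects is just the Gym-style reset/step interface. A toy skeleton (plain Python; a real environment would subclass gymnasium.Env and declare observation/action spaces, and the price process here is made-up random noise, not a trading strategy):

```python
import random

class ToyTradingEnv:
    """Minimal reset/step skeleton in the shape stable-baselines3 expects.
    Everything here is hypothetical: random-walk prices, three actions."""

    def __init__(self, horizon=50):
        self.horizon = horizon

    def reset(self):
        self.t = 0
        self.position = 0            # -1 short, 0 flat, +1 long
        self.price = 100.0
        return [self.price, float(self.position)]

    def step(self, action):          # action: 0 sell, 1 hold, 2 buy
        self.position = action - 1
        move = random.gauss(0.0, 1.0)
        self.price += move
        reward = self.position * move    # PnL from holding through the move
        self.t += 1
        done = self.t >= self.horizon
        return [self.price, float(self.position)], reward, done, {}

# The rollout loop an RL library runs under the hood:
env = ToyTradingEnv()
obs, done = env.reset(), False
while not done:
    obs, reward, done, info = env.step(random.randrange(3))
print(env.t)  # 50
```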
laaweel t1_j56s7g8 wrote
Reply to comment by ramya_1995 in [D] GCN datasets by ramya_1995
I didn't know it either but I found this blog post:
ramya_1995 OP t1_j56ovcx wrote
Reply to comment by laaweel in [D] GCN datasets by ramya_1995
u/laaweel I have another quick question. The Cora dataset splits the labels into 140 for training, 500 for validation, and 1000 for testing (according to the DGL website). I found that these numbers correspond to numbers of nodes (it's a node classification problem). But any thoughts on why the sum (140 + 500 + 1000) does not match the total node count of the Cora dataset (2708 nodes)? Is it because the rest of the nodes are unlabeled? Thank you!
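The arithmetic behind the question:

```python
train, val, test, total = 140, 500, 1000, 2708
print(total - (train + val + test))  # 1068 nodes outside the standard split
```

As I understand it, in the standard (Planetoid-style) split those remaining nodes still take part in message passing as graph structure — their features and edges are used — but their labels simply aren't included in the train/val/test metrics.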
Michal2020E t1_j56lzjn wrote
Interesting
Low-Mood3229 OP t1_j56lqee wrote
Reply to comment by clvnmllr in [R] Is there a way to combine a knowledge graph and other types of data for ML purposes? by Low-Mood3229
I did look at resources about graph embeddings, but they all seem to discuss them in a link-prediction or graph-completion sense. My use case is more the classification of data points (containing many seemingly unimportant features that may or may not have some relationship to each other — relationships that are captured in the knowledge graph).
SpoonBender900 t1_j56lnnr wrote
Reply to [D] Simple Questions Thread by AutoModerator
I'm having some challenges finding usable data for AI projects, any suggestions? Here's a post I tried to make about it (it got auto-removed, eek).
LetterRip t1_j56kpcq wrote
Reply to comment by xorbinant_ranchu in [D] Inner workings of the chatgpt memory by terserterseness
It probably summarizes the context if it is longer than the allowed input.
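A toy sketch of that kind of context management (word counts stand in for tokens, and the summarizer is a stub; a real system would use the model's tokenizer and an actual summarization call):

```python
def fit_context(history, limit, summarize):
    """If the chat history exceeds the input limit, replace the oldest
    turns with a summary and keep as many recent turns as fit."""
    def words(turns):
        return sum(len(t.split()) for t in turns)

    if words(history) <= limit:
        return list(history)
    old, recent = list(history), []
    # pull recent turns back in until roughly half the budget is used
    while old and words(recent) + len(old[-1].split()) <= limit // 2:
        recent.insert(0, old.pop())
    return [summarize(old)] + recent

stub_summarize = lambda turns: f"[summary of {len(turns)} earlier turns]"
history = ["hello there friend"] * 10   # 30 words total
out = fit_context(history, limit=12, summarize=stub_summarize)
print(out[0])  # [summary of 8 earlier turns]
```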
clvnmllr t1_j56kitl wrote
Reply to [R] Is there a way to combine a knowledge graph and other types of data for ML purposes? by Low-Mood3229
This is the use of knowledge-graph embeddings as features for ML. "Graph embeddings" is your keyword and should help you find other resources.
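Concretely, the pattern is just concatenating pretrained graph-embedding vectors onto the ordinary feature matrix before training any classifier (dimensions below are made up; the embeddings would come from something like TransE or node2vec):

```python
import numpy as np

rng = np.random.default_rng(0)
tabular = rng.random((5, 3))    # 5 entities, 3 ordinary tabular features
kg_embed = rng.random((5, 8))   # hypothetical 8-dim graph embedding per entity

# Combined feature matrix: any downstream classifier sees both views
X = np.hstack([tabular, kg_embed])
print(X.shape)  # (5, 11)
```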
East-Beginning9987 OP t1_j56hh5t wrote
Reply to comment by Double-Swimmer3495 in [D] ICLR 2023 results. by East-Beginning9987
I see, thanks
BadassGhost t1_j55rxme wrote
Reply to comment by currentscurrents in [D] is it time to investigate retrieval language models? by hapliniste
I think the biggest reason to use retrieval is to solve the two biggest problems:
- Hallucination
- Long-term memory
Make the retrieval database MUCH smaller than RETRO's, and constrain it to respectable sources (textbooks, nonfiction books, scientific papers, and Wikipedia). You could either skip the textbooks/books, or make deals with publishers. Then add to the dataset (or a second dataset) everything the model sees in a certain context in production. For example, add all user chat history to the dataset for ChatGPT.
You could use cross-attention as in RETRO (maybe with some RLHF like ChatGPT), or just engineer some prompt manipulation based on embedding similarities.
You could imagine ChatGPT variants that have specialized knowledge that you can pay for. Maybe an Accounting ChatGPT has accounting textbooks and documents in its retrieval dataset, and accounting companies pay a premium for it.
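The "prompt manipulation based on embedding similarities" route fits in a few lines. A toy sketch, with a fixed-vocabulary bag-of-words encoder standing in for a real learned text embedder:

```python
import numpy as np

# A tiny hypothetical retrieval database of trusted snippets
docs = [
    "accrual accounting recognizes revenue when earned",
    "gradient descent minimizes a loss function",
    "the eiffel tower is in paris",
]
vocab = {w: i for i, w in enumerate(sorted({w for d in docs for w in d.split()}))}

def embed(text):
    """Bag-of-words vector over the doc vocabulary, L2-normalized.
    A real system would use a learned sentence encoder instead."""
    v = np.zeros(len(vocab))
    for w in text.lower().split():
        if w in vocab:
            v[vocab[w]] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

query = "when is revenue recognized in accrual accounting"
scores = np.stack([embed(d) for d in docs]) @ embed(query)  # cosine similarities
best = docs[int(scores.argmax())]

# Prepend the closest document to the prompt before calling the LM
prompt = f"Context: {best}\n\nQuestion: {query}"
print(best)  # the accounting document
```

Grounding answers in the retrieved snippet is what attacks the hallucination problem; swapping the database is what makes the paid specialized variants imaginable.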
wintermute93 t1_j55rhlf wrote
Reply to comment by Avelina9X in [D] Did YouTube just add upscaling? by Avelina9X
Same Chrome build as you, GTX 1080. I'm in the US but usually have my VPN set to somewhere in Eastern Europe; no difference after turning it off. I see this.
Avelina9X OP t1_j55l3ks wrote
Reply to comment by wintermute93 in [D] Did YouTube just add upscaling? by Avelina9X
What version of Chrome? What's your region? I'm in the UK, using a GTX 1660 Ti (but Chrome running on Intel Iris graphics) with Chrome version 109.0.5414.75 (Official Build) (64-bit) (cohort: Stable).
Avelina9X OP t1_j55kz89 wrote
Reply to comment by NotARedditUser3 in [D] Did YouTube just add upscaling? by Avelina9X
Correction: Vimeo does this too. It's only in Chrome, but other people also running 109.0.5414.75 (Official Build) (64-bit) (cohort: Stable) do not see this behaviour.
Avelina9X OP t1_j55kvbx wrote
Reply to comment by tomvorlostriddle in [D] Did YouTube just add upscaling? by Avelina9X
Okay. This is occurring in Chrome, but only Chrome (not Discord or Edge). It happens on YouTube and Vimeo. But it doesn't occur for other Chrome users even though we're on the same version, 109.0.5414.75 (Official Build) (64-bit) (cohort: Stable).