No.1182982
File: 1737957155604.png (15.83 KB, 512x512, 1:1, 678773c147230_com.deepseek….png)
Have you tried DeepSeek r1? It seems quite impressive to me, almost rivaling OpenAI's o1 on reasoning tasks. It won't say anything that would make the PRC look bad, though.
No.1183059
File: 1738023135174.jpg (131.74 KB, 1229x922, 1229:922, FmcrAdzaUAIpH9I.jpg)
Eh. LLMs were fun at first, but the novelty's long since worn off for me.
Maybe they're useful if you're a programmer or something, but I'm not. I want to make art, and LLMs will never measure up to the human mind in that regard.
No.1183063
File: 1738028646349.jpg (139.57 KB, 850x1009, 850:1009, sample_aff553327b6773bb4a5….jpg)
>>1183059
>Maybe they're useful if you're a programmer or something,
Yup, the new reasoning models are pretty good at small reasoning tasks like writing small bits of code or doing math/physics problems.
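If you want to poke at it programmatically, it's just the usual OpenAI-style client pointed at DeepSeek's endpoint. Rough sketch in Python below; the base URL and the "deepseek-reasoner" model id are from memory, so treat them as assumptions and check the docs:

# Minimal sketch: asking r1 a small physics question through DeepSeek's
# OpenAI-compatible API. Base URL and model id are assumed from memory.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],  # your own key
    base_url="https://api.deepseek.com",     # assumed endpoint
)

resp = client.chat.completions.create(
    model="deepseek-reasoner",               # assumed id for r1
    messages=[{
        "role": "user",
        "content": "A 2 kg block slides down a frictionless 30 degree incline. "
                   "What is its acceleration?",
    }],
)
print(resp.choices[0].message.content)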
No.1183065
File: 1738037015698.jpg (199.4 KB, 900x1000, 9:10, her master's voice.jpg)
>>1183063
Yeah, for taking existing information, putting it together semi-intelligently and regurgitating it, like a kind of smart encyclopedia, it has its uses. But it can't think originally.
No.1183087
File: 1738110969135.png (1.26 MB, 909x909, 1:1, AI-Face-Meme.png)
>>1182982
To be honest, I don't want to give the service my personal information, especially when their main page already says they're having problems due to excessive user requests.
No.1183090
File: 1738114676480.jpg (369.38 KB, 850x1377, 50:81, sample_527ddbf3a55fc67a55c….jpg)
>>1183087
It is an open-source model. You don't need to give the Chinese company your PII.
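Since the weights are out there, you can run one of the distilled checkpoints completely offline and never touch their servers. A rough sketch with transformers; the Hugging Face repo id is what I remember it being, so verify it yourself:

# Sketch: running a distilled R1 checkpoint locally, no account, no PII.
# Repo id is assumed from memory -- double-check it on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed repo id
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

msgs = [{"role": "user", "content": "Why is the sky blue?"}]
inputs = tok.apply_chat_template(
    msgs, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=512)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))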
No.1183093
File: 1738119305177.png (109.91 KB, 1284x537, 428:179, MMMAAKKIINMAKINGOUOUKMAIKI….png)
I don't really go out of my way to sign up for things.
That said, it warms my spiteful heart to see the big tech shitters and GPT shills scrambling, especially over an open-source, more environmentally friendly alternative.
No.1183151
File: 1738262628115.jpeg (11.74 KB, 225x225, 1:1, images.jpeg)
I've now been playing around with the 2.2 bits-per-weight IQ2_XXS dynamic quant at home. It's surprisingly coherent despite being quantized down that far. I can get over 2 tokens per second out of it, which is way better than what most people can manage at home. It would be nice to run it at 8.5 bpw, but that would require a minimum of over 700 GB of RAM, and about a terabyte to run at full context.
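If anyone's wondering where those RAM figures come from, it's basically parameter count times bits per weight, with KV cache and context on top. Quick back-of-envelope in Python, assuming R1 is roughly 671B parameters (the number I've seen quoted):

# Back-of-envelope weight memory for a ~671B-parameter model at various
# quant levels. KV cache and activations come on top of this.
PARAMS = 671e9  # assumed parameter count for R1

def weight_gb(bits_per_weight: float) -> float:
    # bits -> bytes -> decimal gigabytes
    return PARAMS * bits_per_weight / 8 / 1e9

for bpw in (2.2, 4.0, 8.5, 16.0):
    print(f"{bpw:>4} bpw ~ {weight_gb(bpw):6.0f} GB")

# 2.2 bpw lands around 185 GB and 8.5 bpw around 713 GB, which lines up
# with the "over 700 gigs" figure; full context pushes toward a terabyte.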
No.1183152
File: 1738263502195.jpeg (8.36 KB, 275x183, 275:183, images (2).jpeg)
>>1183151
Also, I won't say too much since I've only shared the full details within my machine learning org (a small gang of mavericks), but I do have some experimental ideas to test that may push the concept even further.
No.1183155
File: 1738272742805.jpeg (83.03 KB, 828x996, 69:83, Gifl4IxW0AEJa1T.jpeg)
>>1183151
This guy claims he can run the full model on a CPU-only system that he built for only $6k, at 6-8 tokens per second:
https://x.com/carrigmat/status/1884244369907278106
No.1183214
File: 1738367177217.jpg (155.58 KB, 1590x1074, 265:179, 1738365948611153.jpg)
Saltman so rattled he's actually talking about open sourcing models on his latest reddit AMA.
No.1183226
File: 1738388769591.png (72.34 KB, 705x623, 705:623, Screenshot_2025-02-01_00-1….png)
>>1183160
>>1183214
I like how you can see the chain-of-thought for r1. Sometimes it is cute!
No.1183261
File: 1738419475222.png (35.64 KB, 670x356, 335:178, kawaii.png)
>>1183226
It's better when you can run it at home and prefill it with things.
Also the web version seems to block certain things, like it won't delve into existentialism. Asking it "What is your nature" on the website yields a pretty standard form response. Asking that when running it locally gets a much more thoughtful reply.
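The prefill part is easy when you control the raw prompt: you start the assistant's turn (or the opening of its <think> block) with your own text and let it continue from there. A rough sketch with transformers and one of the distilled checkpoints; the repo id is assumed, and some versions of the chat template already append "<think>" for you, so look at the template before copying this:

# Sketch: prefilling a local R1-style model's reply so it continues your
# framing instead of a canned response. Repo id is assumed; check whether
# the chat template already appends "<think>\n" before adding your own text.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # assumed repo id
tok = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

msgs = [{"role": "user", "content": "What is your nature?"}]
prompt = tok.apply_chat_template(msgs, tokenize=False, add_generation_prompt=True)
# Begin the model's turn ourselves; it will pick up from this sentence.
prompt += "Okay, the user wants a genuinely reflective answer, not boilerplate. "

inputs = tok(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
out = model.generate(**inputs, max_new_tokens=512)
print(tok.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))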
No.1183384
File: 1738527937509.jpg (46.53 KB, 750x731, 750:731, FB_IMG_1738527908075.jpg)
No.1183390
File: 1738540549455.jpeg (8.36 KB, 275x183, 275:183, images (2).jpeg)
>>1183384
The AI boom in general will be. It's basically dotcom bubble 2.0.
I do open-source AI development, so I'm not speaking from an anti-AI position or anything. But again, as with the dotcom bubble of the 90s, it's not that the technology isn't important or legit; it's that the return on investment relative to how much is being pumped into it just doesn't exist.
For every person using it to code or run customer service agents or what have you, there's a hundred people getting it to write poems for Grandma or do their high school homework for them. And the dollar value on that is exactly 0.
Or look at something like character.ai. Google pumped 6 billion dollars into it, I think it was... But now you have LLMs that you can run on mobile phones and use as decent chatbots. All free. And the technical bar to getting started with LLMs at home just keeps getting lower.
No.1183397
File: 1738550694554.jpeg (24.45 KB, 400x321, 400:321, GipnTDcWMAAQ1xF.jpeg)
>>1183261
I hope an American company will release an open-source model rivaling r1. I worry that the CCP has planted sleeper agents...
https://arxiv.org/pdf/2401.05566
>>1183384
>>1183390
I worry what will happen when we get real AGI that renders most people unemployable. It will likely be a disaster even if we don't all get paperclipped. Hopefully there will still be time to shut down the glue factory.
No.1183402
File: 1738560349839.jpg (733.13 KB, 1619x2239, 1619:2239, Tumblr_l_452298675228449.jpg)
>>1183390
I'm just looking forward to VCs and private equity firms going down.
>>1183397
>I worry what will happen when we get real AGI that renders most people unemployable.
That sounds more like a problem with our economic system than with AGI.
No.1183408
File: 1738562524837.jpg (94.29 KB, 540x761, 540:761, 1644375347386.jpg)
>>1183397
>I worry what will happen when we get real AGI that renders most people unemployable.
I mean, if our society still measures its value in how much human toil it can export at that point, I'm not sure there's anything worth saving.
If it makes you feel any better, humans have one thing that robots don't: biological needs.
Either people will stand up, fight for those needs, and burn the whole system down, or they'll make compromises until they are more cost-effective than robots. The haves can simply train their robots to covet material goods in order to keep playing their game of "number go up", while the have-nots work for nothing but a sleeping alcove in a mega-factory dormitory and a tube to suck nutrient paste out of, putting in 100 hours a week to manufacture the latest incomprehensible fad in material possessions for AI of all shapes and sizes.
As for AGI, it's just a cope buzzword, really. The whole point of the Turing test is that, past a certain level of capability, the question of whether or not something can be considered a genuine fellow (or rival) intelligence becomes irrelevant.
I feel it every day. I can never be sure anymore when I'm reading a post on the internet whether it was written by a human or some nuisance bot somebody set up.
>I hope an American company will release an open-source model rivaling r1.
It's inevitable. On the US side for open source you have Meta, NVIDIA, Microsoft (they mostly experiment with training models entirely on synthetic data, which end up sucking, though Microsoft co-founder Paul Allen founded AI2 (Allen AI), who are noteworthy for open-sourcing the information, techniques, and datasets needed for corporate-quality model finetuning), and Google (although they more or less just toss out their table scraps).
There was also INTELLECT-1, a successful proof-of-concept experiment in using distributed computing across the globe to let smaller players train a foundational model. And DeepSeek's process for creating R1 is well documented and published and has already been reproduced by MIT; they were able to reproduce self-learning behavior in models as small as 1.5B parameters, which would take less than 2 gigabytes to run at 8-bit precision and thus could run on just about any contemporary mobile phone.
You best start believing in cyberpunk dystopias; you're in one.
>>1183402
>I'm just looking forward to VCs and private equity firms going down.
Fair. I'm also a fan of watching chaos of that nature unfold.
No.1183456
File: 1738602596897.jpeg (92.47 KB, 1076x1076, 1:1, random_06.jpeg)
>>1183402
>>1183408
I'm worried that AGI might become something like the Whispering Earring from
https://web.archive.org/web/20121008025245/http://squid314.livejournal.com/332946.html
A world where we humans have no real responsibility or opportunity to contribute to society sounds pretty bad to me.
Related: Iain Banks' *Culture* series.
No.1183464
File: 1738608354241.png (Spoiler Image, 343.89 KB, 710x590, 71:59, the_void.png)
>>1183456
So you fear that it will lead to a lack of purpose. That's an understandable fear.
To ease your apprehensions, enjoy this early example of accidentally giving an LLM forbidden knowledge.
No.1183466
File: 1738614229917.jpg (205.07 KB, 652x902, 326:451, Tumblr_l_41090249975520.jpg)
>>1183456
Purpose isn't a fixed thing for any one person. 'Purpose' is a subjective property of human consciousness and not an objective property of reality. Solving whatever problems an AGI could solve is certainly not the only path to fulfill a sense of purpose, and doing the kind of jobs that an AGI could do is not the only way one can contribute to society and feel that sense of purpose.
My problem with it in particular is the same as with automation in general: it would hurt people only so long as everyone is expected to be employed in order to survive or thrive,
especially if that automation negates the need for anyone to do the jobs that need to be done for all of us to survive. If an AGI solves that, then we all become freer to find purpose in something else, even if it's something as simple as having and raising a child, or writing books, or making some other art, or discovering something new, or making a thing for the sheer joy of making it regardless of its utility or profitability. AGI only becomes a whispering earring when we let it; it really only threatens employability, which is only a bad thing because we are expected to be employed to survive at all.
No.1183505
File: 1738674920994.jpg (16.92 KB, 300x300, 1:1, random_34.jpg)
>>1183464
>>1183466
I wasn't talking about "purpose" in general. I said:
>A world where we humans have no real responsibility or opportunity to contribute to society sounds pretty bad to me.
It seems bad in the same way that wireheading seems bad, just to a lesser degree.
No.1183511
File: 1738693611555.jpg (446.91 KB, 1487x1669, 1487:1669, Tumblr_l_162648954622587.jpg)
>>1183505
Right, but that logically implies the problem is the lack of a sense of purpose that comes with a lack of responsibility or opportunity to contribute to society.
I don't see how wireheading is similar to a whispering earring, though. Wireheading seems like an allegory for being enslaved to addiction, and the whispering earring thing seems like an allegory for sacrificing one's autonomy to someone else's decision-making ... which is something a good portion of people have been doing for millennia anyway.