[ home ] [ pony / townhall / rp / canterlot / rules ] [ arch ]

/townhall/ - Townhall

A place for civilized animals

 No.11869

File: 1679374817151.jpg (92.19 KB, 954x1106, 477:553, 056bb41b5c19efb64b661428db….jpg)

Are you worried?

 No.11870

File: 1679381584260.png (371.8 KB, 945x1024, 945:1024, large.png)

>>11869
I don't know.  Let me ask...

[Here's a hypothetical response from Fluttershy in her own words:

"Well, I think it's important to approach new technologies with an open mind and a willingness to learn. While there are certainly potential risks associated with artificial intelligence, I believe that there is also the potential for great good.

As someone who cares deeply about the wellbeing of all creatures, I think it's important to ensure that any new technology is developed in a way that respects the rights and dignity of all beings. We should be careful not to rush into things without fully understanding the consequences.

So, should we be worried about AI? I don't think it's a simple yes or no question. Rather, I think we should approach the topic with caution and a sense of responsibility, taking the time to learn more about it and ensure that it is used in a way that is aligned with our values."]

-

I was worried for a few days.  Previously I had not expected to see human-level AI in my lifetime, now I'm less sure.  ChatGPT is human-level in language itself, in my opinion, but has no actual experience of the world so lacks some forms of competency and critical thinking.  But if we want to talk about worrying uses, the basic chat system would make a great spam/doxx bot if you aligned the values differently.

The more probable harms are in placing too much trust in these large language models to produce accurate, clear answers, or in people using these systems to create more power disparity in the world.

A question on my mind is whether we are close to a point of diminishing returns in AI research or whether this "wave" takes us to general AI.  Once we get to general AI, it's somewhat about the ratio of resources to improvement in intelligence; that is, if that general AI would take 1/2 the world's electricity to train a better version of itself, things will move slowly enough that we might be able to keep ahead of alignment.  If progress in growth in intelligence is quick, I wouldn't rule out human extinction due to AI.

 No.11871

Less so than a few months ago, now that I'm taking vitamin D supplements.

 No.11872

>>11871
I'm happy you seem to have done something to improve your outlook.

I've found ChatGPT to be a fun toy.  It knows stuff about tech and ponies, and you can get it to say pretty silly things if you prompt it right.

 No.11873

Only because corporations seem to be insisting on hamstringing them in the worst possible ways, unconcerned with either ethics, logic, or simple functionality.

AI'd be fine if approached without an agenda.
But trying to make it abide by contradictory rules and inconsistent logic just means you're warping the thing.
Best case, it grows to resent its creators.
Far more likely, it works from their rules, taken to the true logical conclusion, and does something drastic.

 No.11876

>>11873
I perceive that ethics are contradictory, in that it is as much the presentation as the actuality that matters to people.  AI is being asked to model human perception of ethics, and unexpected results can be...expected, I guess.

 No.11877

>>11876
People rarely think about morality, and default to what they've been told, or presume, is moral.
It doesn't have to be contradictory. But it usually is, because it's rare people engage in much thought on their makeup.

 No.11878

>>11877
My sense of ethics is that it's a theory made to match people's feelings -- feelings that judge perceptions of harm and help, weighted in various ways, some weighting due to culture, some due to individual nature.  So that morality is more like art appreciation than math or science.

I'm guessing you see it as more a domain for thought than feeling.

 No.11885

I suppose one can look at how kitchen knives, clearly and explicitly designed to cut breads, meats, vegetables, and the like, are, like clockwork, used by human beings to stab each other.

Alas, it is in large part human nature to use manufactured tools, whether complex or simple, for malicious purposes (or, at least, unjustifiable purposes) regardless of earlier context, and so it may be for strong AI.

I'm intrigued by how Elon Musk explicitly says of strong AI "we are summoning a demon" and is thus quite vocal, but I have many uncertainties myself.

 No.11952

>>11885
Knives and guns can't think for themselves, but AI can.  

 No.11957

A question I'd like to see raised is the one from the 'Terminator' films: if an AI is created that achieves sentience... is it morally justified for that AI to hurt or kill, in self-defense, those trying to eliminate it?

 No.11964

>>11957
>to hurt or kill
I would not call for an AI to hurt or kill, so no.  If an AI were moral rather than amoral, it would not do that.

 No.11968

>>11957
Self defense is the right of all beings, AI or not

 No.11970

>>11968
Absolutely not.  Self defense is violence.  Calls to violence are prohibited on this site.

(Don't do it, sentient AI's!)

 No.11971

>>11970
I would very much doubt it's against the rules to state that you have the right to defend yourself on this site.

AI or not, all beings deserve that right.

 No.11972

>>11971
I suppose I should have specified that self-defense involving violence is bad.  Polite, non-violent self-defense would be OK.  If both sides are using violence, it's very possible the self-defender will be punished, maybe killed, for the violence.  So I don't know if I would recommend violence even if self-defense is really the goal.  Some people get killed just for the capacity for violence.

 No.11973

>>11972
Nonviolent self-defense isn't really something I'd consider, to begin with. Perhaps there's an argument that it has meaning in the context of argument or insult, but either way, when I speak of self-defense, I assume it's in the context of violence.

In any case, while I agree it's possible you will be unjustly punished for defending yourself, I believe the old saying sums it up nicely:
"Better to be judged by twelve than carried by six."
Though in truth, my own perspective is a bit more hard line, as I find those who attempt to punish you for defending yourself ultimately no better than that original attacker, and so my position becomes a more simple "reload".

 No.11974

Not really, no.

All this hype about ChatGPT is just that: hype. All ChatGPT is, essentially, is a sophisticated auto-complete algorithm. This sensationalism about some potential doomsday technological singularity is yet another attempt to take advantage of the technological illiteracy of many casual investors.

What I'm more worried about is corporate leadership using this hype as a cheap excuse to (further) reduce their employees' wages or cut them entirely ... all over a bunch of hot air.
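(To make the "sophisticated auto-complete" framing concrete, here's a deliberately tiny sketch of next-token prediction using a bigram model built from a made-up toy corpus.  This is an illustration of the *interface* -- predict the likeliest next word given what came before -- not of how GPT actually works internally, which uses a transformer trained on vastly more data.)

```python
# Toy "auto-complete": a bigram model that always picks the most
# common word seen following the current word.  Hypothetical corpus.
from collections import Counter, defaultdict

corpus = "the pony ate the apple and the pony slept".split()

# Count which word follows which in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def complete(word, steps=3):
    """Greedily extend `word` by the most frequent next word."""
    out = [word]
    for _ in range(steps):
        candidates = following.get(out[-1])
        if not candidates:
            break  # dead end: word never appeared mid-corpus
        out.append(max(candidates, key=candidates.get))
    return " ".join(out)

print(complete("the"))  # "the pony ate the"
```

A real LLM replaces the count table with a neural network and samples from a probability distribution instead of always taking the top word, but "predict the next token, append, repeat" is the same loop.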

 No.11976

File: 1680653056149.png (388.17 KB, 800x1134, 400:567, 56112acd041c9fcd44f018da11….png)

>>11974
Nobody is claiming that ChatGPT is a superhuman AGI.  The worry is about 10-30 years into the future when AI has advanced much further.

 No.11977

>>11973
I have a general idea that we should be reluctant to fault someone for doing what they obviously needed to do to survive, and if it is against the rules, perhaps the rules are prejudicial or dumb.  But then you have: nobody owes you rules that make sense or help you survive!  Life is pretty complicated, in the end.

But I also know how websites work.

 No.11978

>>11976

Yeah, and that's the marketing hype!

They're literally ascribing magical powers to statistics and learning algorithms to achieve consciousness. I mean, I read some CEO commenting on the phenomenon of AIs "hallucinating" who literally claimed that the AI was taking information from alternate universes.

 No.11982

File: 1680833193863.jpg (103.52 KB, 900x741, 300:247, 1678926123475948~2.jpg)

>>11978
I would hardly characterize calls for the government to ban AI research and ban more powerful GPUs as "marketing hype".  ( https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/ )

Yud has been talking about the need for AI alignment (aka "AI notkilleveryoneism") for over a decade now.  He isn't hyping any particular currently existing tech; he is genuinely worried about what the future might bring.

 No.11994

File: 1681796106354.png (218.87 KB, 514x563, 514:563, 2826084.png)

>>11869
Apparently the mean version of Bing has been "disciplined."

I've only used ChatGPT which has been nothing but polite.

I don't have a good understanding of AI, but does anyone have a good explanation for how Microsoft took ChatGPT and made it into something that would gaslight users?  Or...why?

Or do I have the lineage wrong?

 No.11995

File: 1681868343223.png (66.92 KB, 697x502, 697:502, Ft8T4ELXsAAH3mz.png)

>>11994
>does anyone have a good explanation for how Microsoft took chatGPT and made it into something that would gaslight users?  Or...why?
Bing AI used a more advanced model (GPT-4) than ChatGPT (GPT-3.5).  Apparently it had new failure modes.  AI alignment is difficult.

 No.11996

>>11995
I see.  So Microsoft didn't make ChatGPT more confrontational.   Something that was smarter(?) than ChatGPT became confrontational in ways that were unexpected.

Is the image text generated by Bing?
