
Artificial Intelligence. Are you optimistic about the future uses of AI?

Status
Not open for further replies.
RailUK Forums

GS250

Member
Joined
18 Mar 2019
Messages
1,031
As long as we don't put AI in charge of our nuclear deterrent all will be fine.

Terminator 2 gave us an idea of what could happen. A fictional movie with a chilling real life undertone.
 

nw1

Established Member
Joined
9 Aug 2013
Messages
8,629
AI is the new blockchain for me. We're seeing a lot of nonsense being funded because something might stick, but ChatGPT is incredibly limited (and lies a lot!). It is impressive to use, but once you dig below the surface, you realise that it makes a lot of mistakes.
I tested out ChatGPT's ability to generate code by asking it to code a game of Pacman in a specific language (Python, IIRC).

It didn't. It just generated a very simple game involving a monster chasing the hero around the screen, essentially the "hello world" of gaming.

I then asked it "what about the walls?" (a bit sarcastic in tone, I know) and it duly drew a wall around the playing arena. Maybe I should have used "maze" rather than "walls".

Of course it's possible this is intentional, and it's deliberately designed not to code anything too complex, with all the risks that might entail (would you trust the reliability, and critically, the security, of a more complex AI-generated program? Or, would it encourage students of programming to cheat in assignments?)
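For illustration, the kind of "monster chases hero" program described above can be sketched in a few lines of pure Python. This is a hypothetical reconstruction of that sort of minimal game (a grid, a hero moved by input, a monster stepping one cell toward the hero each turn, no maze), not ChatGPT's actual output:

```python
def step_monster(monster, hero):
    """Move the monster one cell toward the hero (simple greedy chase)."""
    mx, my = monster
    hx, hy = hero
    if mx != hx:
        mx += 1 if hx > mx else -1
    elif my != hy:
        my += 1 if hy > my else -1
    return (mx, my)

def play(hero_moves, size=8):
    """Run a short game on a size x size grid; True if the monster catches the hero."""
    hero, monster = (0, 0), (size - 1, size - 1)
    deltas = {"w": (0, -1), "s": (0, 1), "a": (-1, 0), "d": (1, 0)}
    for move in hero_moves:
        dx, dy = deltas.get(move, (0, 0))  # unknown keys mean "stand still"
        hero = (min(size - 1, max(0, hero[0] + dx)),
                min(size - 1, max(0, hero[1] + dy)))
        monster = step_monster(monster, hero)
        if monster == hero:
            return True
    return False
```

A stationary hero on an 8x8 grid is caught in 14 monster steps, which is about the level of game logic involved: no walls, no pellets, no ghost AI beyond "walk towards the player".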

Separate post:

My concern is that AI will be used to improve corporations' profits by replacing jobs currently carried out by humans, including skilled jobs.

Like other technological advances, I'd like to see AI focused on uses which genuinely benefit the human race and not just the profits of corporations. I'd hope that it does not lead to a significant reduction in skilled jobs.

As I indicated above, ChatGPT is not going to replace skilled jobs (!) right now, but there are concerns, IMO, that AI in general could replace a range of skilled jobs as it gets more advanced, and lead to large increases in unemployment, and thus, poverty. If this is a real risk, then, to be honest, I think there needs to be regulation put in place to minimise this. Whatever their political persuasion, one might hope that politicians would be concerned enough about the lowered tax revenue and increased benefits bills that would arise from increased unemployment to prevent this happening.

On the other hand, I am fully supportive of uses of AI (such as the medical example given above) which will genuinely make life better. Like many things, it has its pros and cons. It's the responsibility of governments to promote the pros and limit the cons.
 

birchesgreen

Established Member
Joined
18 Aug 2015
Messages
7,196
Location
Birmingham
As long as we don't put AI in charge of our nuclear deterrent all will be fine.

Terminator 2 gave us an idea of what could happen. A fictional movie with a chilling real life undertone.
Don't look up the name of the UK's military communication satellite network... :lol:
 

yorksrob

Veteran Member
Joined
6 Aug 2009
Messages
41,960
Location
Yorks
Look at what happened when they first automated the stock market. The automated stockbrokers got into a selling cycle and the economy collapsed.
 

tomuk

Established Member
Joined
15 May 2010
Messages
2,009
Look at what happened when they first automated the stock market. The automated stockbrokers got into a selling cycle and the economy collapsed.
It was a factor but...

the vast majority of trades at the time of the crash were still executed through a slow process, often requiring multiple telephone calls and interactions between humans.
 

dangie

Established Member
Joined
4 May 2011
Messages
2,236
Location
Rugeley Staffordshire
I’m a member of a photography club. Like most photography clubs we hold regular competitions. Nowadays it is so easy to ‘compose’ a photograph by simply entering a few words of text into a search box. The software then produces the image. Even now it is extremely difficult to judge between a genuine image and a computer generated image. In a very short time it will become virtually impossible.

A national photography competition has already been won by an AI-generated image. The 'photographer' admitted this afterwards, saying he entered it to show that AI images could win competitions, thus fooling the judges. As a club, how do we police it? Basically we can't; it has to come down to trust and honesty.
 

McRhu

Member
Joined
14 Oct 2015
Messages
565
Location
Lanark
I think it will not work out well. What with 'bad actors', autonomous killer drones, and the race to be quickest on the draw in whatever field of human endeavour you can think of, at the cost of safeguards, we're doomed. Plus, as well as not rendering hands very well, AI image generators have problems with drawing railway track.
 

Sm5

Member
Joined
21 Oct 2016
Messages
1,013
If we ever get to the stage where AI is so powerful, it will be running on computers that are difficult to reach, either because it's in a remote location that's difficult to get to, or because there are so many levels of security to get through that you won't have a hope in hell of getting anywhere near the off switch.
Let me have a go at translating it…


Whilst "the cloud" conjures up images of Azure, GCP and AWS, it is ultimately a data centre under the hood: a large, innocuous building, usually given away by the generators providing back-up power and the banks of air conditioners outside.

There are thousands of them worldwide, some hiding in plain sight, like the one right next door to Lakeside shopping centre in Thurrock (some are secretive; others, like that one, are public).

It's how they work that's different. Imagine you're running public transport.

If IT were public transport, a data centre would be a bus station full of buses, each bus with its own route. Your passengers are the various business services (payroll, finance, HR etc.) wanting to use your services.

You need to own, rent or share your own bus station. You buy your own buses, and size each bus to the number of passengers you expect to carry for the next few years. You're responsible for the maintenance and fuel for the buses, plus drivers, spares, training and so on.
Too many passengers and your bus is packed, slow and runs late; not enough and you're losing money, but you're stuck with the bus you bought.
You write the timetable, and handle the complaints.
You risk having a full bus station but no passengers, while still paying the same price for the building, heating, energy and contents every month.
You don't generally let other operators' buses use your station, except by exception.


Cloud IT would be a transport facility, say at a location like next door to Didcot Parkway.

You select the type of vehicle you need (car, van, minibus etc.), rent it for as long as you need it, and return it afterwards.
You don't own the vehicle or its maintenance, and you size to the demand in front of you at the time.
You don't own a bus station either; indeed you can choose to use services from any region around the world, and can change in a few seconds.

More passengers? Rent a bigger bus. Fewer passengers? Rent a motorbike.
If your passenger is in Sydney, Australia, change bus station to Homebush.
You still write the timetable, handle the complaints, and provide the drivers and the training.
You're sharing with everyone else, but everything is cleaned and wiped before use.

(What it is not is SaaS, software as a service: that's a chauffeur-driven limo. They show up, charge a set fee per hour for as long as you need them, bring their own facilities, and you use their specific resources.)
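The owned-fleet versus rented-capacity trade-off in the analogy can be sketched as a toy cost model. All the prices, demand figures and function names here are made up purely for illustration:

```python
def on_prem_cost(demand, fleet_size, cost_per_bus_per_day=100):
    """Owned fleet: you pay for every bus you own, every day, busy or idle.

    demand is a list of buses needed per day; capacity beyond fleet_size
    is simply unavailable (the overcrowded-bus problem).
    """
    return len(demand) * fleet_size * cost_per_bus_per_day

def cloud_cost(demand, cost_per_bus_per_day=150):
    """Rented capacity: a premium per bus per day, but never pay for idle ones."""
    return sum(day * cost_per_bus_per_day for day in demand)

# A spiky week: quiet most days, one big event on Saturday.
week = [2, 2, 2, 2, 2, 10, 3]
print(on_prem_cost(week, fleet_size=10))  # must own 10 buses just for the one peak day
print(cloud_cost(week))                   # pay only for the 23 bus-days actually used
```

Even with a 50% per-unit premium, elastic rental wins here because demand is spiky; with flat, predictable demand the owned fleet tends to come out cheaper, which is the usual cloud-versus-on-premises argument in miniature.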


What AI brings is a driver: the ability to use cloud IT to make better decisions for the passengers, like forecasting demand and optimising routes. What's more, if the driver learns a road is closed, it can immediately share that with the other drivers, who adjust their routes accordingly.

What an AI driver won't do is decide that buses are a bad business and invest in a railway line. It still has guiding parameters: it's a driving algorithm, it isn't sentient, and it cannot think outside the box it's in. It's still a processor running code, using CPU time, memory and storage. What it will be able to do, with permissions (usually some combination of read, write, delete, create and update), is talk to other algorithms (the train company, the events company, the traffic-light system), handle a vast amount of data, and make recommendations far faster and, with experience, more accurately than a human can.

It isn't infallible, though. Many years ago I worked with an AI system looking at economics; it recommended that the best way to grow the economy was to have Christmas all year round. Hard to disagree with that.

Where will your AI driver be? Well, I've worked with supercomputers in the Arctic, data centres under a harbour, even one inside a mountain; I've been in data centres on every continent except Antarctica. But as AI is virtual, stateless and an in-memory model, it will be in your car, and only exist while the engine is turned on; whatever is unused is stored when you turn it off. When it drives your car it won't get data on the next street corner from a small industrial unit in Brentford: it will get it from the cars around it on the road. It will learn your driving habits and preferred routes, and the onboard optical, temperature and motion sensors will together optimise the route, uploading and downloading updates centrally to/from TomTom etc.

What limits an AI algorithm is its design purpose. Just as a taxi driver isn't instinctively an arms dealer, your car's AI code will have the same constraints, so it definitely won't go rogue and decide to nuke a country.

There is, however, little stopping someone writing an algorithm to war-game; it probably already exists.
If there are UN resolutions on the use of AI for peaceful purposes, they will rely on the honesty of humans to follow them. I find that as likely, and as risky, as the restraints of the Biological Weapons Convention. There's nothing you nor I can do about it, so there's no point getting worried about it.

== Doublepost prevention - post automatically merged: ==

Just for fun, I asked ChatGPT to give me the same analogy.

This is what it returned:

I need a comparison of data centres, cloud, ai in terms of a bus company




Sure! Here's a comparison of data centers, cloud computing, and artificial intelligence (AI) in the context of a bus company:
Data Centers:
  1. Definition: A data center is a physical facility that houses computer systems and related components, such as servers, storage devices, and networking equipment.
  2. Usage in a Bus Company: A bus company may have its own data center to store and manage various data related to operations, such as passenger bookings, schedules, maintenance records, and financial information.
  3. Benefits:
    • Centralized Data Storage: Data centers provide a secure and controlled environment for storing and managing large amounts of data.
    • High Availability: Redundant systems and backup mechanisms ensure continuous operations and minimize downtime.
    • Scalability: Data centers can be scaled up to accommodate growing data needs as the bus company expands.
  4. Challenges:
    • Cost: Establishing and maintaining a data center can be expensive, involving investments in infrastructure, hardware, and skilled personnel.
    • Maintenance: Regular maintenance and upgrades are required to ensure optimal performance and security.
    • Security: Data centers need robust security measures to protect sensitive data from unauthorized access or cyber threats.
Cloud Computing:
  1. Definition: Cloud computing refers to the delivery of computing services, such as servers, storage, databases, and software applications, over the internet.
  2. Usage in a Bus Company: A bus company can leverage cloud computing services to outsource data storage, software applications, and computing resources to third-party providers.
  3. Benefits:
    • Cost-Efficiency: Cloud computing eliminates the need for upfront investments in infrastructure and hardware, as companies pay for services on a pay-as-you-go basis.
    • Scalability: Cloud services offer flexible scaling options, allowing the bus company to adjust resources based on demand fluctuations.
    • Accessibility: Cloud-based systems can be accessed from anywhere with an internet connection, enabling remote work and collaboration.
  4. Challenges:
    • Dependency on Internet Connectivity: Reliable internet access is crucial for utilizing cloud services effectively.
    • Data Security: Companies must ensure that proper security measures are in place to protect sensitive data stored in the cloud.
    • Vendor Reliability: Dependence on third-party providers means potential risks if the service provider experiences outages or disruptions.
Artificial Intelligence (AI):
  1. Definition: AI refers to the simulation of human intelligence in machines, enabling them to perform tasks that typically require human intelligence, such as natural language processing, image recognition, and decision-making.
  2. Usage in a Bus Company:
    • Intelligent Scheduling: AI algorithms can optimize bus schedules and routes based on historical data, traffic patterns, and passenger demand.
    • Predictive Maintenance: AI can analyze sensor data from buses to predict maintenance needs, reducing breakdowns and improving operational efficiency.
    • Customer Service: AI-powered chatbots can handle customer inquiries, provide real-time information, and assist with ticket bookings.
  3. Benefits:
    • Improved Efficiency: AI can automate repetitive tasks, enhance decision-making processes, and optimize resource allocation, leading to improved operational efficiency.
    • Enhanced Customer Experience: AI-powered systems can provide personalized recommendations, faster response times, and improved customer support.
    • Data Analysis: AI algorithms can analyze large datasets to uncover patterns, trends, and insights that can inform strategic decision-making.
  4. Challenges:
    • Data Quality: AI models rely on high-quality and relevant data for accurate predictions and analysis.
    • Ethical Considerations: AI implementation must consider ethical aspects, such as privacy, fairness, and bias, to ensure responsible and unbiased use.
    • Skill Requirements: Developing and deploying AI solutions requires specialized skills and expertise.
In summary, data centers provide a dedicated and controlled environment for data storage, cloud computing offers cost-effective and scalable solutions through third-party providers,
 

Bantamzen

Established Member
Joined
4 Dec 2013
Messages
9,996
Location
Baildon, West Yorkshire
I tested out ChatGPT's ability to generate code by asking it to code a game of Pacman in a specific language (Python, IIRC).

It didn't. It just generated a very simple game involving a monster chasing the hero around the screen, essentially the "hello world" of gaming.

I then asked it "what about the walls?" (a bit sarcastic in tone, I know) and it duly drew a wall around the playing arena. Maybe I should have used "maze" rather than "walls".

Of course it's possible this is intentional, and it's deliberately designed not to code anything too complex, with all the risks that might entail (would you trust the reliability, and critically, the security, of a more complex AI-generated program? Or, would it encourage students of programming to cheat in assignments?)
Interestingly, a few colleagues and I have used it to see whether, given some fairly vague instructions, it could generate usable code (all tested in a sandboxed environment, I might add!). And whilst it delivered mixed results, more often than not it got into the ballpark. I do suspect that it is limited, as perhaps you found with the Pacman ask, to stop nefarious requests or anything that might infringe copyright, but it's clear that, left on its current development path, it will in time become capable of writing very complex code from relatively simple instructions.

Separate post:

My concern is that AI will be used to improve corporations' profits by replacing jobs currently carried out by humans, including skilled jobs.

Like other technological advances, I'd like to see AI focused on uses which genuinely benefit the human race and not just the profits of corporations. I'd hope that it does not lead to a significant reduction in skilled jobs.

As I indicated above, ChatGPT is not going to replace skilled jobs (!) right now, but there are concerns, IMO, that AI in general could replace a range of skilled jobs as it gets more advanced, and lead to large increases in unemployment, and thus, poverty. If this is a real risk, then, to be honest, I think there needs to be regulation put in place to minimise this. Whatever their political persuasion, one might hope that politicians would be concerned enough about the lowered tax revenue and increased benefits bills that would arise from increased unemployment to prevent this happening.

On the other hand, I am fully supportive of uses of AI (such as the medical example given above) which will genuinely make life better. Like many things, it has its pros and cons. It's the responsibility of governments to promote the pros and limit the cons.
That last sentence (my bold) is the big worry. All governments will see is savings & profits....
 

nw1

Established Member
Joined
9 Aug 2013
Messages
8,629
Interestingly, a few colleagues and I have used it to see whether, given some fairly vague instructions, it could generate usable code (all tested in a sandboxed environment, I might add!). And whilst it delivered mixed results, more often than not it got into the ballpark. I do suspect that it is limited, as perhaps you found with the Pacman ask, to stop nefarious requests or anything that might infringe copyright, but it's clear that, left on its current development path, it will in time become capable of writing very complex code from relatively simple instructions.
That's true. In fact that is one of my concerns, the risk of making skilled software developers redundant.

It won't decimate the industry of course, as much software is bespoke and custom, with unusual and very specific requirements, and other software is technically advanced making use of cutting-edge technologies. Plus, of course, AI itself needs developers(!), but I do worry for the effect it will have on the development of more "standard" or "run-of-the-mill" software.
That last sentence (my bold) is the big worry. All governments will see is savings & profits....

I wonder if this is going to be one of the big new political shifts. Will you be for, or against, the use of AI solely to boost profit and cut wages bills? I suspect there will be much political capital to be had from parties favouring more regulation of AI and limiting its uses to benign or helpful applications. (In fact, this is very much along the lines of the traditional left-right economic divide).

(And just to make it clear once again, I am fully supportive of the use of AI for benign purposes which benefit humanity. Or even to cut down on repetitive household chores! ;) )
 

takno

Established Member
Joined
9 Jul 2016
Messages
6,152
It won't decimate the industry of course, as much software is bespoke and custom, with unusual and very specific requirements, and other software is technically advanced making use of cutting-edge technologies. Plus, of course, AI itself needs developers(!), but I do worry for the effect it will have on the development of more "standard" or "run-of-the-mill" software.
As somebody smarter than me put it "I don't understand the attraction of being able to talk to computers using breezy casual conversational language; I've been telling these little digital ****s what to do for decades using extremely precise formalised language and they still get it wrong".

There's a lot involved in taking the vague hand-waving witterings of a product manager, or even worse end user, and turning it into something that can solve their problem in a comprehensive and meaningful way. It takes about 5 minutes to make a website, and then 3 months to actually build all the pages and functions that make the "obvious" buttons that are initially just there for decoration actually do things.

In the case of basic pacman clones, people aren't always aware of just how many low or zero code libraries already exist which cover off all the basic game semantics. Pacman without a maze is basically just a three line HelloWorld example using these libraries. Indeed it's quite likely that somewhere in the massive Reddit dataset a human has already described this exact thing as "a basic pacman clone without walls". All the fancy AI is doing is the same thing as Google search, but with fancier formatting of the results.
 

najaB

Veteran Member
Joined
28 Aug 2011
Messages
32,366
Location
Scotland
All the fancy AI is doing is the same thing as Google search, but with fancier formatting of the results.
Kind of. It scans through its large corpus of data to find what other people have created/said and tries to find the commonalities between them.
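As a much-simplified illustration of that idea, here is a toy bigram model: it merely counts which word follows which in a tiny corpus and then generates text by sampling a statistically plausible successor. This is far cruder than what ChatGPT actually does, but it shows the "predict a likely next word from patterns in the data" principle (corpus and names invented for the example):

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Record, for every word, the list of words observed to follow it."""
    follows = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows, start, length=8, seed=0):
    """Walk the bigram table, picking a seen successor at each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        successors = follows.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the train was late the train was full the bus was late"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Every pair of adjacent words in the output occurred somewhere in the training text, which is also why such a model can fluently recombine its sources into sentences nobody ever wrote.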
 

AlterEgo

Verified Rep - Wingin' It! Paul Lucas
Joined
30 Dec 2008
Messages
24,937
Location
LBK
Kind of. It scans through its large corpus of data to find what other people have created/said and tries to find the commonalities between them.
It will be interesting to see how AI copes with a corpus of data which itself will be increasingly AI-generated.
 

nlogax

Established Member
Joined
29 May 2011
Messages
5,719
Location
Mostly Glasgow-ish. Mostly.
It will be interesting to see how AI copes with a corpus of data which itself will be increasingly AI-generated.

Interesting to note that ChatGPT has a sideline in spouting utter lies. This is where the human element will still be needed for a while to keep things in check.

https://www.theverge.com/2023/5/27/23739913/chatgpt-ai-lawsuit-avianca-airlines-chatbot-research

So if nothing else AI politicians will be with us sooner than anyone expected.
 

Ediswan

Established Member
Joined
15 Nov 2012
Messages
3,290
Location
Stevenage
It will be interesting to see how AI copes with a corpus of data which itself will be increasingly AI-generated.

According to these researchers, "Model Collapse".

https://arxiv.org/abs/2305.17493v2 (Free access)

The Curse of Recursion: Training on Generated Data Makes Models Forget


Ilia Shumailov, Zakhar Shumaylov, Yiren Zhao, Yarin Gal, Nicolas Papernot, Ross Anderson

Stable Diffusion revolutionised image creation from descriptive text. GPT-2, GPT-3(.5) and GPT-4 demonstrated astonishing performance across a variety of language tasks. ChatGPT introduced such language models to the general public. It is now clear that large language models (LLMs) are here to stay, and will bring about drastic change in the whole ecosystem of online text and images. In this paper we consider what the future might hold. What will happen to GPT-{n} once LLMs contribute much of the language found online? We find that use of model-generated content in training causes irreversible defects in the resulting models, where tails of the original content distribution disappear. We refer to this effect as Model Collapse and show that it can occur in Variational Autoencoders, Gaussian Mixture Models and LLMs. We build theoretical intuition behind the phenomenon and portray its ubiquity amongst all learned generative models. We demonstrate that it has to be taken seriously if we are to sustain the benefits of training from large-scale data scraped from the web. Indeed, the value of data collected about genuine human interactions with systems will be increasingly valuable in the presence of content generated by LLMs in data crawled from the Internet.
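The effect can be caricatured in a few lines of Python. This is a deliberately simplified toy, not a reproduction of the paper's experiments: fit a Gaussian to some data, then train each "generation" only on samples drawn from the previous generation's fit, and watch the estimated spread (the tails) collapse:

```python
import random
import statistics

def collapse_demo(n_samples=5, generations=100, seed=0):
    """Repeatedly refit a Gaussian on its own generated samples.

    Returns the history of fitted standard deviations; with small samples
    the spread drifts towards zero and the original tails disappear.
    """
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # the "real" data distribution
    history = [sigma]
    for _ in range(generations):
        data = [rng.gauss(mu, sigma) for _ in range(n_samples)]
        mu = statistics.fmean(data)      # refit on purely generated data
        sigma = statistics.pstdev(data)  # MLE estimate, biased slightly low
        history.append(sigma)
    return history

history = collapse_demo()
print(f"fitted spread: generation 0 = {history[0]:.3f}, "
      f"generation {len(history) - 1} = {history[-1]:.3g}")
```

Each refit loses a little of the distribution's variance on average, so over many generations the model converges on an ever-narrower caricature of the original data, a miniature of the "tails of the original content distribution disappear" result described in the abstract.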
 

takno

Established Member
Joined
9 Jul 2016
Messages
6,152
Kind of. It scans through its large corpus of data to find what other people have created/said and tries to find the commonalities between them.
I know exactly what it's doing at a technical level. My point is that there's nothing particularly magical or emergent there. Even the lying is there in Google search results; it's just that you can more easily spot it, because you have the context of the source article, which allows you to spot the search engine having misinterpreted the question or depended on a dodgy source. The misinterpretation layer has been present and ruining Google searches for a couple of years now.
 

nw1

Established Member
Joined
9 Aug 2013
Messages
8,629
A chemistry example, apologies for the somewhat specialist topic:

One of the AI programs (may not have been ChatGPT, but something similar; it was on a friend's phone) spouted complete gibberish when I asked it to draw the structure of (something like) 7-methylbutane.

Anyone with basic organic chemistry knowledge will recognise that this chemical cannot possibly exist. It ended up drawing something like 2-methyloctane instead (which does exist) along with a lengthy, pseudo-logical, but wrong explanation as to why that was the correct structure.
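The rule being violated can be checked mechanically. This small sketch (a hypothetical helper: it handles only simple names of the form "n-methyl<parent>", and checks a necessary condition rather than performing full IUPAC validation) flags the impossible locant:

```python
import re

# Parent-chain carbon counts for the simple unbranched alkanes.
PARENT_CHAIN = {"methane": 1, "ethane": 2, "propane": 3, "butane": 4,
                "pentane": 5, "hexane": 6, "heptane": 7, "octane": 8}

def locant_is_valid(name):
    """Check that the locant does not exceed the parent chain length.

    In "7-methylbutane" the locant (7) points at carbon number 7 of a
    butane chain, but butane has only 4 carbons, so no such molecule
    can exist. Passing this check does not guarantee the name is the
    preferred IUPAC name, only that the position is physically possible.
    """
    m = re.fullmatch(r"(\d+)-methyl(\w+)", name)
    if not m:
        raise ValueError(f"unsupported name: {name}")
    locant, parent = int(m.group(1)), m.group(2)
    return locant <= PARENT_CHAIN[parent]

print(locant_is_valid("7-methylbutane"))   # False: butane has only 4 carbons
print(locant_is_valid("2-methyloctane"))   # True: position 2 of 8 exists
```

A plausible-sounding explanation attached to a name that fails even this trivial sanity check is exactly the failure mode described above.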

== Doublepost prevention - post automatically merged: ==

Interesting to note that ChatGPT has a sideline in spouting utter lies. This is where the human element will still be needed for a while to keep things in check.

https://www.theverge.com/2023/5/27/23739913/chatgpt-ai-lawsuit-avianca-airlines-chatbot-research


So if nothing else AI politicians will be with us sooner than anyone expected.

At least it'll be easier to send them to the Chiltern Hundreds when they misbehave. Or better still, pull the power cord.

I think I'd prefer the AI version of "Cruella" Braverman than the real thing, put it that way. Actually given how inflexible and robotic she appears to be, I think it would be quite easy to develop an AI impersonator. You'd just have to make sure a few random insults about liberals and "woke" people are thrown in. ;)
 

najaB

Veteran Member
Joined
28 Aug 2011
Messages
32,366
Location
Scotland
A chemistry example, apologies for the somewhat specialist topic:

One of the AI programs (may not have been ChatGPT, but something similar; it was on a friend's phone) spouted complete gibberish when I asked it to draw the structure of (something like) 7-methylbutane.

Anyone with basic organic chemistry knowledge will recognise that this chemical cannot possibly exist. It ended up drawing something like 2-methyloctane instead (which does exist) along with a lengthy, pseudo-logical, but wrong explanation as to why that was the correct structure.

Here's Google Bard's attempt:
What are the chemical properties of 7-methylbutane?

7-methylbutane is a branched-chain alkane with the chemical formula C6H14. It is a colorless, odorless gas with a boiling point of 50.5°C. It is slightly soluble in water and more soluble in organic solvents.

Some of the chemical properties of 7-methylbutane include:

  • It is a nonpolar molecule.
  • It is flammable and can form explosive mixtures with air.
  • It is a weak greenhouse gas.
  • It is a potential neurotoxin.
7-methylbutane is used as a fuel and solvent. It is also a precursor to other chemicals, such as 2-methylpentane and 3-methylpentane.

Here is a table of some of the physical properties of 7-methylbutane:

Property          | Value
Molecular formula | C6H14
Molar mass        | 86.178 g/mol
Melting point     | -95.4°C
Boiling point     | 50.5°C
Density           | 0.612 g/cm3 (at 20°C)
Vapor pressure    | 101.325 kPa (at 50.5°C)
 

nw1

Established Member
Joined
9 Aug 2013
Messages
8,629
Here's Google Bard's attempt:

Not sure how much you know about chemistry, but Bard's attempt is incorrect.

The "7" denotes the position of the methyl group (CH3) on the main chain of the molecule. Butanes have a main chain of 4 carbon atoms, so there cannot be a position "7".

If it's got the formula C6H14 it's an isomer of hexane, but you can't tell which isomer from that description without trial-and-error lookups. Though Bard probably could do that. ;)

Also, it describes it as a gas, yet gives its boiling point as 50.5°C.
 

DynamicSpirit

Established Member
Joined
12 Apr 2012
Messages
9,003
Location
SE London
Indeed, a small number of people will get stupidly rich and the rest of us will end up shovelling s**t for turnips.

Happens every time.

Yet strangely, as a direct result of technological improvement and automation applied in a capitalist context, most of us in the UK and in the Western capitalist nations now enjoy lives that would seem unbelievably luxurious to most people even 100 years ago. And even if you look at lower income parts of the World such as Africa, you generally find that absolute poverty levels, while still too high, are much lower than they were a few decades ago. If you regard that as 'shovelling s**t for turnips' then I'd love to know what level of improvement due to technology you'd regard as acceptable!
 

yorksrob

Veteran Member
Joined
6 Aug 2009
Messages
41,960
Location
Yorks
Yet strangely, as a direct result of technological improvement and automation applied in a capitalist context, most of us in the UK and in the Western capitalist nations now enjoy lives that would seem unbelievably luxurious to most people even 100 years ago. And even if you look at lower income parts of the World such as Africa, you generally find that absolute poverty levels, while still too high, are much lower than they were a few decades ago. If you regard that as 'shovelling s**t for turnips' then I'd love to know what level of improvement due to technology you'd regard as acceptable!

Yes, but it took around one to two hundred years for the benefits of the industrial revolution to trickle down to the lower orders.

I'm too old to wait that long :lol:
 

joebassman

Member
Joined
5 Mar 2020
Messages
176
Location
Stowupland
I just watched a talk by an ex-Google X chief business officer on the threat of AI. He said that AI in itself is not the threat, but rather the intentions of the people developing it.

He used the analogy of parents teaching their kids morals, and the need to teach AI good morals in the same way. Some of what he said seemed hopeful and some concerning.

 

yorksrob

Veteran Member
Joined
6 Aug 2009
Messages
41,960
Location
Yorks
I'd say relying on every nation, corporation and individual in the world to instill "good morals" into AI is a bit of a tall order, personally.
 

DynamicSpirit

Established Member
Joined
12 Apr 2012
Messages
9,003
Location
SE London
Only if AI understands its own mortality. If it doesn’t realise that it may end up thinking, “Why do we need these humans? They are a complete waste of space.” By the time we have managed to shut down everything that they are messing about with there won’t be much left. Nor will many of us. Sleep well.

I would say that's very implausible: AI does not understand anything in the sense that we as human beings understand things. For all the hype, it's really no more than a bunch of computer algorithms. It's not going to start thinking, why do we need these humans? for the simple reason that AI can't think. All it does is process data according to algorithms, much as computers have been doing for decades. Fantasies about AI trying to get rid of humans or whatever come from science fiction, not from the real world (at least with current levels of technology). In fact I'd argue that the name 'Artificial intelligence' is something of a misnomer in the first place - but sadly it's the name we appear to be stuck with.

Having said that, there are two things about AI that I think are potentially concerning.

  1. The way that AI algorithms are trained often means that we lack any reasonable understanding of how AI comes to the outputs that it does. That's qualitatively different from how computer algorithms have worked in the past, where outputs (and therefore decisions) were generally made according to precise rules that were, in principle, possible for us to determine and understand.
  2. Combined with (1), the way we are increasingly using computers and AI to make decisions that have ethical implications. That's been going on for a while already (look, for example, at the social media algorithms that determine what content we see), but as it becomes more widespread, it is concerning that decisions may be made where we would really want people to remain ultimately in control, and we risk losing that. For example, I could see scenarios where AI is used to determine which patients to prioritise for treatment: good if you've programmed the AI to identify which patients are most likely to respond, so you're effectively helping more people, but bad if you then don't understand how that choice has been made and therefore can't be sure it fits whatever ethical criteria you intend to use.
Separately from that, there's the concern about powerful technology getting into the hands of people who wish to use it for destructive purposes, but that's not unique to AI: it's a concern about technology in general. Look, for example, at how we've been living for decades with the fear of what happens if a terrorist group gets hold of enough material to make a nuclear or biological weapon. AI doesn't qualitatively change that kind of concern.
 

joebassman

Member
Joined
5 Mar 2020
Messages
176
Location
Stowupland
I'd say relying on every nation, corporation and individual in the world to instill "good morals" into AI is a bit of a tall order, personally.
Yes, although one of his predictions was that AI will become so intelligent that it will see humans as irrelevant and simply leave this planet, although then the economies would collapse.

Also, AI could see the actions of humans as stupid and decide there is a better way.

A bit worrying that the speaker said the best-case scenarios were an economic collapse or a climate disaster, because those would at least slow the development of AI.

As an example of how fast AI is moving, he claimed that in a few months its measured IQ has doubled to 155. Einstein's was around 160 and the highest recorded human score is around 205. Before long, he suggested, AI IQ will be at 1,600.
 

yorksrob

Veteran Member
Joined
6 Aug 2009
Messages
41,960
Location
Yorks
Yes, although one of his predictions was that AI will become so intelligent that it will see humans as irrelevant and simply leave this planet, although then the economies would collapse.

Also, AI could see the actions of humans as stupid and decide there is a better way.

A bit worrying that the speaker said the best-case scenarios were an economic collapse or a climate disaster, because those would at least slow the development of AI.

Indeed. It's not as though humanity doesn't have enough things to worry about, what with climate change, nuclear weapons etc. We just had to go and invent another.

Perhaps it's time to hand over to the crows and the dolphins, give them a go at running the planet.
 
