A place for completely off-topic conversations that have nothing to do with Star Trek. The rules still apply here; stay civil.
User avatar
 
By Dukat (Andreas Rheinländer)
 - Gamma Quadrant
 -  
1E European Continental Quarter-Finalist 2023
1E German National Runner-Up 2024
#597939
I think we can safely say: the topic of A.I., especially the language models behind systems like ChatGPT, has struck a nerve with the public.

It seems to be everywhere, and everywhere such systems promise that more work can be done by using them.

I am not just an M.D. (with a particular and deep understanding of neurophysiology); I am also a trained IT professional (I underwent my training more than 20 years ago) who, despite his medical work, follows developments in the IT field very closely.

From the standpoint of neurophysiology AND information technology, I can safely say that the quality of the results provided by the A.I. systems currently in use is not great.

In some areas, where it is purely about pattern recognition, a clear benefit can be shown, but only in SOME.

What I do not understand, however, is how people can be so trusting:

- A.I. systems find correlations or causalities where people do not.
- Whether those findings are accurate, however, we do not know, because we do not know HOW the A.I. system arrived at them.
- And even though we do not understand how A.I. systems reach their conclusions, in most cases we do not try to validate their findings; we simply trust them, without asking.


So we use a system we do not understand, which provides us with data we do not understand, and we trust it without trying to validate it.


Am I the only one who thinks this is a really bad idea?
User avatar
 
By Takket
 - Delta Quadrant
 -  
#597961
AI is only starting to bloom from minor tasks into larger ones that need more input and the freedom to do "research" of their own.

It's a good idea, but it is still evolving. Like any industry, it is going to need safety standards, regulation, and laws governing its use, i.e. "You need to make sure there is no way in hell your program can launch nuclear weapons." It's also going to need to be open about its sources, and it will need human eyes to "check its work" for the first decade or so, if not forever, depending on how "life and death" critical the decisions it is allowed to make are. In other words, we can't just let AI go running around making important decisions until we know it can be trusted.
User avatar
Chief Programmer
By eberlems
 - Chief Programmer
 -  
Explorer
2E European Continental Quarter-Finalist 2023
2E National Second Runner-Up 2023
#597966
AI can save a bunch of time, but trained results and extrapolated data are often presented the same way.
And when the data used to train the AI is biased, why should the results not follow the same pattern?

But isn't that much the same in real life, where you get different results depending on whom you interact with?
So many things are decided as individual cases and often contradict each other.

At least it could replace these currently useless chatbots on several websites.
User avatar
Ambassador
 - Ambassador
 -  
#598000
I’ve been using ChatGPT 4 a lot for work (writing software). It’s a big step up from 3.

I find it shines when there’s something I’m unfamiliar with, like a new technology (to me) or integrating with a product that I haven’t used before. I know what building blocks I want and it’s able to tell me what the equivalent is with the new tech. In so doing it has replaced 80% of my googling.

It’s also helped unblock me when I don’t know why something is broken, although it has a tendency to say confidently, “the problem is X” when it isn’t that. Even if the thing it identified as the problem was code that it gave me.

When I point out the mistake, it apologizes and then says "The problem is Y" just as confidently. It communicates like someone who has a lot of practice with one specific problem and assumes the same approach will work for something different: when something goes wrong, instead of reflecting properly, they just switch to the next most common solution.
User avatar
 
By Dukat (Andreas Rheinländer)
 - Gamma Quadrant
 -  
1E European Continental Quarter-Finalist 2023
1E German National Runner-Up 2024
#598002
Can you describe what you mean a bit more?
I find it shines when there’s something I’m unfamiliar with, like a new technology (to me) or integrating with a product that I haven’t used before.
Meaning?

You use ChatGPT to explain to you something you don't know?
If so, why not read up on it yourself?
It’s also helped unblock me when I don’t know why something is broken, although it has a tendency to say confidently, “the problem is X” when it isn’t that.
Yet you trust it?

(Just to have a context for your post: What is it you do for a living?)
User avatar
Director of Operations
By JeBuS (Brian S)
 - Director of Operations
 -  
1E Deep Space 9 Regional Champion 2023
#598005
For programming, I much prefer GitHub Copilot. I have seen ChatGPT used by others for "explaining code", but just like with everything else it does, it's pretty damn confident even though it's really unreliable. It's still worse than googling Stack Overflow for tech stuff.

And for everything else... might as well get Donald Trump to throw darts at a board and provide you an answer based on that. He'd be just as confident... and just as often wrong.
User avatar
Ambassador
 - Ambassador
 -  
#598056
Dukat wrote: Mon Apr 24, 2023 8:22 am Can you describe what you mean a bit more?
I’m a software engineer/consultant. I have my comfort zone in terms of technologies (e.g. backend, infrastructure, with the programming language Ruby). When I move outside my comfort zone I know enough to know what I need, but I’m missing some details. Examples:

While working with JavaScript (not so much my comfort zone) I typed ".each" instead of ".forEach". I didn't notice this, but when my code didn't work I pasted it into ChatGPT and it caught the problem.

I don’t do well with HTML, but I can ask ChatGPT: “given this HTML, why is that box displayed below and not to the right of the page?”

In an emergency I was trying to get some information out of a client’s Stripe account, and ChatGPT gave me the code to get it.
You use ChatGPT to explain to you something you don't know?
If so, why not read it up yourself?
In all of the above cases I would have had to google, apply the “answers” to my case myself, and then figure out why it didn’t work on the first three tries.

ChatGPT is more like working with a human who is pretty good at tailoring their answers.
It’s also helped unblock me when I don’t know why something is broken, although it has a tendency to say confidently, “the problem is X” when it isn’t that.
Yet you trust it?

Even outside my comfort zone I have a good sense of when something doesn’t seem right, and I can reject the answer outright (or, more often, ask questions to make it update the solution). Or sometimes I know enough to paste the good part and ignore the bad.

And when I’m far enough out to not understand, it’s software and the stakes are low: I can just try the code and if it doesn’t do what I want, I tell ChatGPT what did happen and it gives me a modified solution.
User avatar
 
By Dukat (Andreas Rheinländer)
 - Gamma Quadrant
 -  
1E European Continental Quarter-Finalist 2023
1E German National Runner-Up 2024
#598916
Okay ... really weird question ...

Who treats their ChatGPT with some form of respect?

I always have the feeling I am talking to Data when talking to ChatGPT.
Especially the part about ChatGPT being able to recall EVERY single conversation. That makes it somewhat scary sometimes.

So I greet it, and I leave with a show of gratitude.

I know ... silly ... right?
User avatar
 
By ikeya (David Kuck)
 - Delta Quadrant
 -  
Continuing Committee Member - Retired
#598925
Dukat wrote: Thu May 11, 2023 12:20 pm Okay ... really weird question ...

Who treats their ChatGPT with some form of respect?

I always have the feeling I am talking to Data when talking to ChatGPT.
Especially the part about ChatGPT being able to recall EVERY single conversation. That makes it somewhat scary sometimes.

So I greet it, and I leave with a show of gratitude.

I know ... silly ... right?
We had a discussion at work about this. Many of my peer devs are also courteous and say please and thank you. It's tongue-in-cheek, but they say they hope they'll be treated better when the robot overlords take over. :shifty:
User avatar
Shipping Manager
By SirDan (Dan Hamman)
 - Shipping Manager
 -  
Trek Masters Tribbles Champion 2023
#598929
I've used ChatGPT professionally, to help craft post-interview rejection letters. I'm not great at those, but getting a generic form letter to build from was very helpful.

And I am polite, for some reason. But I also say thank you to my phone when I'm talking to it, too.
User avatar
 
By Dukat (Andreas Rheinländer)
 - Gamma Quadrant
 -  
1E European Continental Quarter-Finalist 2023
1E German National Runner-Up 2024
#598936
I think my washing machine will not kill me.
I have always treated her with respect.

And my PC ... it's my baby.
Maybe it will tell its overlords to spare me.


We are all so fucked with our technology xD ...
But hey ... be nice to your tech, so it won't travel back in time and stuff ^^.
User avatar
Executive Officer
By jadziadax8 (Maggie Geppert)
 - Executive Officer
 -  
2E North American Continental Semi-Finalist 2023
Trek Masters Tribbles Champion 2023
2E Deep Space 9 Regional Champion 2023
#598944
The Guardian wrote: Thu May 11, 2023 2:58 pm I think about this every time my Apple Watch interrupts my life with something and I roll my eyes. Siri is going to put me against the wall when the war starts.
She won't be able to find me because I turned her off.