221: Generative AI – Don’t Panic?


On today’s episode of Still To Be Determined we’re talking about Generative AI. It’s a little known topic, so maybe we’re the first place you’re hearing about it. Kidding aside … should we be freaking out about OpenAI?

Watch the Undecided with Matt Ferrell episode, AI Just Changed Everything … Again https://youtu.be/q9_BY2QsK1Y?list=PLnTSM-ORSgi4At-R_1s6-_50PCbYsoEcj

YouTube version of the podcast: https://www.youtube.com/stilltbdpodcast

Get in touch: https://undecidedmf.com/podcast-feedback

Support the show: https://pod.fan/still-to-be-determined

Follow us on X: @stilltbdfm @byseanferrell @mattferrell or @undecidedmf

Undecided with Matt Ferrell: https://www.youtube.com/undecidedmf

★ Support this podcast ★

On today’s episode of Still to be Determined, we’re talking about AI. This is a little known topic. You probably haven’t heard too much about it. So let’s get into it and inform you about this exciting new endeavor. Hi everybody. I’m Sean Ferrell. I’m a writer. I wrote some sci fi. I wrote some stuff for kids.

And I’m just generally curious about tech. Luckily for me, my brother is that Matt from Undecided with Matt Ferrell, which takes a look at emerging tech and its impact on our lives. And Matt, how are you today?

I’m doing great. There are people who are normal, like normal listeners and watchers of the show.

Um, I’m actually starting the process of installing a little DIY solar installation on my shed in the backyard. I’ve never done DIY like this before. It’s, uh, interesting, Sean. I wasn’t thinking of making a video about it, but I might, because it’s, uh, kind of interesting.

Well, you are, I mean, you’re not a prepper, but what you’re doing is effectively semi-prepper: a little shed that you and your wife and your dog can move into if the zombie apocalypse, or, let’s be honest, the AI apocalypse hits. Who knows which one will get us first? We’ll get into that more later. Before we do, we always like to revisit our comments from our previous episode, which was episode 220.

I can’t believe it’s now that many episodes. That’s really kind of a wild number. This was our discussion about solid state batteries. And there were lots of comments like this from Octothorp who weighed in on what it is that is leading people to dig in their heels when it comes to big change. He points out humans have issues with bias, confirmation bias, et cetera.

It makes it very difficult to understand the evolution of technology as it’s happening in front of you. Exactly as Matt mentioned, the thing you learned 40 years ago in school isn’t necessarily true today. Example, dinosaurs had feathers, Pluto isn’t a planet, et cetera, but it’s difficult for us to replace the old information with the new.

I know I face this myself. I recognize it constantly, where I will have parallel thoughts in one moment. I will think, this is what I know. And then, at the exact same time, I will think, ah, but I know that’s no longer true. Recent developments have changed that truth. So it is, uh, an ongoing issue that’s difficult to wrestle with.

Matt, we’ve talked about this before, confirmation bias, and how you seek out the answers that you know will undergird your already-held beliefs. What in your life do you think exemplifies that? What do you do that you recognize, like, oh, this is really confirmation bias, or me leaning into a bias instead of looking for the rock-bottom truth?

Oh man, Sean

I mean, in relation to the channel, I get a lot of feedback on videos that just contradicts what I’m saying in the video. It happens all the time. My favorites are the ones about solar panels and things I’ve done in my house, where people say, oh, that’s not the right way to do it. You should be doing X, Y, and Z.

Um, half the time I have that knee jerk reaction of like, oh, that’s not the case. No, no, that’s just wrong. It’s completely wrong. But then I try my best to try to break through that confirmation bias of, no, maybe they have a point. And so I’ll go do a little research. I’ll do a little digging to try to force myself to keep an open mind and to look into it.

Even though my initial gut reaction was there’s no way that is true. And I’ll dig into it. And sometimes I find out I was absolutely correct. Now, other times I’ll find, well, they’re not exactly a hundred percent correct, but they, they were onto something. And so it kind of helps keep my mind open. So it’s like, for me, it’s a constant challenge.

I know it’s human nature. So for me it’s pretty much a daily thing of being challenged on what I think is the norm, and sometimes I’m proven wrong. And so I have to kind of recalibrate my brain a little bit.

I think there’s a similar, a similar component of this that I exemplify all the time.

And you and I just talked about it before we started this recording: my bias toward leaning into all of my previous habits around a thing. Leaning into the idea of, well, I did this thing before by myself, I should be able to do it by myself now, and not questioning my motives for doing it that way, the value of my time, the value of my money, or breaking things up into, is this worth me spending time on in this way?

That, I think, is my sort of unconscious bias: I lean in immediately to, if I want a thing, then I should do it myself. That’s how it should be done. And it doesn’t always serve me, because what I end up with is a thing that is 15 percent of what I envisioned. And I know that if I had just taken a different approach, brought in a professional, I could have reached my goal.

And it’s not like I’m doing anything dangerous to myself. I’m not like rewiring my home or putting in new gas lines. I’m talking about creative projects and those times when, Oh, my creativity would actually be better expressed by bringing in outside help as opposed to, Oh, but I should do it myself.

There were also comments about our question to the viewers and listeners: do they want to see a follow-up video on the solid-state battery discussion, in the form of sharing the tech that is actually already available in that regard? And there were comments like this from Thalgal, who said, I think a short video, or maybe something comparing available products, could be cool.

The right tool for the right job means the public needs to understand the benefits, and we need to learn about them. And I’m sure I could look it up, but I’d rather watch your content and make you do the work. Smiley face. Thank you for that, Thalgal. We’re very happy to be a resource for people. And we do try to find true, valid, honest information to share with everybody.

We do not want to be brokers of misinformation. There were also comments like this one from, and this is not going to be easy to roll off the tongue, but I’m going to give it my best shot, Barbarudra, who, in referencing the introduction of new things into our lives, and how difficult it is sometimes to incorporate them and understand what is meant by a new thing, quoted one of my all-time favorite jokes from the Simpsons: Have you heard about this internet thing? It’s the inner netting they invented to line swim trunks. So thank you for that, Barbarudra.

This is me jumped in to say, I’m very curious about solid-state batteries for a whole-house backup. I have an uninsulated detached garage that would be an optimal location for a battery, and solid state sounds like it might be the better solution. It’s interesting, This is me, that you frame it as, like, the better solution.

Solid state is merely one battery solution. It’s not like it’s one or the other; they’re all part of the same big umbrella topic. And maybe that’s a moment, Matt, for you to jump in, revisiting the idea of a follow-up video: what things are available right now that might solve This is me’s question?

Before we go forward on that note, there is a company that has a solid-state whole-home battery system that was supposed to be released. They’d been taking pre-orders at, uh, the end of last year. But it’s still not out. They’re still doing pre-orders, and I’ve been trying to get in touch with them to find out, like, what’s the deal with that?

You were supposed to be on the market last year. So there’s definitely tech that should be here, but it’s not, for some reason. So that could be in that next video where I do a deeper dive on it.

Finally, I wanted to share this little comment that I received. I’ve mentioned it, I believe, on this podcast.

I must’ve. Uh, I had a Kickstarter running; I’m doing some adventure writing for Dungeons and Dragons. It just recently closed, and it was successful. And I received a comment there on my Kickstarter project from Andrew Croft, who I imagine must also be here. So Andrew, thank you for your comment, which was: I was undecided if I was going to back this.

But I am glad I did as I heard solid state goblins are less than five years away. So thank you so much, Andrew, for that comment. I imagine that comment means you’ve been aware of the discussions we’ve been having on this side. So thank you for dropping into the Kickstarter as well.

It’s funny.

On now to our discussion about Matt’s most recent episode. This is AI Just Changed Everything … Again, which he released on May 28th, 2024. And I get the impression that what you’re trying to do is pull the camera back a bit and give some runway to the topic, so that when we get into the nuts and bolts of, is AI ethical, is it useful,

is it scary, we actually have a better understanding of the context of what we mean by AI, and how we’ve already been wading in it for a while. It’s a little bit like, uh, the forest-for-the-trees metaphor: we are looking for the forest; if only these trees would get out of our way, we’d be able to identify it when we get there.

And so your lead-in to this goes back decades, to the idea that we have been utilizing AI in various forms and have all become very accustomed to it. It makes sense that we would be unaware of AI in the form of marketing, but obviously there’s a reason why Netflix has been saying, like, oh, you like Star Wars?

Maybe you’d like this as well. Or Amazon saying, we noticed that you might want some light bulbs because you just bought a lamp. That is AI at work. It’s easy to understand, I think, why we would be blind to that because we’re accustomed to advertising. We’re accustomed to walking into a store and seeing banners of different products we might want.

And we just kind of incorporate that into our day, in a way that’s different from saying, oh, would you like to create a picture? You can describe the picture you would like to see, and then magically it will appear, and then you can edit that picture, and you can take that picture and maybe do something with it, or maybe create a video of, say, a well-known person doing something really untoward, and use that to convince people that that person is evil. And that’s where it begins to be that bleeding edge of: where are we? What is going on? What is this? So, having said all of that as a setup to what I envision your video was intending to do, I am wondering: is there one big takeaway about the topic that you were hoping your video brought home?

And if you would, just kind of bluntly state it here, so that it can be a kind of anchor for our discussion as we move into different parts.

This is not going to be succinct, like a sentence, but it’s the whole “AI is not new” thing. It has been here for a very long time. We’ve been using this stuff for a very long time. Because right now we are inundated with every company going, we’ve got AI this and AI that.

It’s like everybody is on the AI crazy train; it’s in full effect right now. And it’s making it feel like, oh, this is new and exciting, this crazy stuff. And if you look at most of the stuff that’s being marketed, half of it is completely useless BS. Like, if you try to use the tools, they’re awful. They don’t really do what they’re advertised to do.

They’re being oversold. Some companies are saying they’re doing AI stuff when, in reality, they’ve already been doing that for five years, ten years. But it’s a marketing term now, so they’re getting on the bandwagon, even though it’s not new for them. They’ve been doing it all along.

I was just trying to give context, to ignore the AI crazy train that’s happening right now. We’ve been doing this forever. The only difference is that the tools have gotten so efficient and easy to use that they’ve become productized for consumers. So there are now tools we can easily take advantage of, where before you had to be a computer engineer, or have the resources behind you to justify the expense of running these algorithms to create models that you can do crazy things with.

That’s gotten boiled down to the rest of us now. It’s like trickle-down AI. So I just kind of wanted to give that context: it looks like everything’s out of control and happening in the past two or three years, like it’s been happening at light speed, and it has been moving fast, but this train’s actually been going for 50 years.

Let’s just tamp it down a little bit. That was the main thing I was trying to drive home: don’t panic. We’re still in that learning phase of this whole thing, um, even though it feels like it just kind of came out of nowhere.

I wonder if you could look at the flip side of what you just said.

Is there one element that you felt you had to leave out of this video? Because it would have opened up too big a discussion or gone too far afield. How did you keep the rails set up for what you were trying to cover so that you didn’t end up stumbling into a, Oh, and now we’re going to talk for 15 minutes about this other side of it.

Yeah. The stuff where we kind of had to make sure we stayed on the rails was not going too far down the policy route, and how people and governments are reacting to this, or not reacting to this. Um, we talked about the ethics, and how some of these companies have just, like, steamrolled over everything.

And that was a rabbit hole, because that could have been a video on its own. As we were pulling this together and doing research, I reined the team in to try to keep it on the rails for what we were doing, which was the context: here’s the history of where we are. Um, but I didn’t really deal with anything around here’s what this tool is, here’s how it works, here’s the cool stuff you can do with it.

And I stayed away from that completely, because, like I said, what Google’s saying about AI versus OpenAI, versus Apple not doing stuff and Microsoft doing things. There was so much we could have peeled the onion on: all the different ways it’s being used, and all the different ways it’s screwing up, and all the different ways it’s kind of amazing.

We just stayed away from that. It was like there are so many videos out there that have already delved into that. I wanted to kind of bring a different kind of angle to the conversation.

Would you think of doing a series of videos on this and revisiting some of those areas? Or do you feel like it is a topic...

I mean, here’s the thing about it. I say this because I, of course, know you: I would want a reliable source to provide some conversation around that, and I view you as a reliable source. I know you’ve got a team. I know you do deep research on these things, and you try very, very hard to practice journalistic ethics.

You’re not going to AI and saying, write me a script about AI, and then just, like, vomiting that information back out to us. So there’s, there’s, uh, there’s that. But I also do understand that retreading ground that’s already well trodden is maybe not what you want to do. So do you think this is something where you would slice it into different pieces of the same pie and take a look at some of these different angles?

Or do you think...

Oh, yeah.

That’s been something that, like, when I was talking to the team about this specific one, it was kind of like, this was me dipping my toe in the pool to see how interested people were in it. Based on the reaction to this video, there are other ones I’d be interested in making that would branch off of this and get a little more detailed: specifically on OpenAI, or specifically on Microsoft, or specifically on how some of this technology is being used for sustainable energy or fusion research.

It’s like, there’s some really cool stuff we could dive into. Um, and there’s news springing up all over the place. Uh, like, there’s AI that’s being able to diagnose an x-ray for certain kinds of cancer. And it’s predictive, where it’s able to predict, with very high accuracy, a certain kind of, I think it was lung cancer.

I can’t remember what it was off the top of my head, but it was able to predict six months ahead of what a human doctor was able to do. A human doctor had to get to a certain point on the x-ray to say, okay, you have cancer, or you have signs of cancer. The AI was able to detect the same possibility in an x-ray that had been taken six months earlier, where the human doctors were like, I don’t know what it’s seeing.

So it’s like, there’s some really cool stuff that’s happening with AI that just kind of blows your mind that could really impact our lives in a really positive way. So much that we could talk about. There is so much you can talk about

because there are so many ways to slice this topic. There is the policy and government interface with all of this. There is the ethical use, or lack of ethical use, of this from a misinformation standpoint. There was just an article in the New York Times about a US citizen, a former sheriff from, I believe, Florida, who is now in Russia, where he has asked for protection, and is running a misinformation mill that uses AI to quickly generate videos and articles that masquerade as well-known sources, CNN, BBC, stuff like that, and then releases them onto the internet, into the wild, to paint negative pictures of politicians.

And it begins to feel like, are we talking about AI? Should we be talking about AI the way we talk about gun manufacturers? Is there a side of this where it’s...

Here’s the interesting thing, though, Sean. And this is the part that isn’t discussed a lot: you can use AI to detect AI. So you have people using AI to create misinformation.

And you can have AI try to suss that out, right? And in real time, basically, label that information as misinformation, right? Here are the provable facts. So there is, like, the white-hat, black-hat thing. Yes, there are people using this stuff in very nefarious ways, but there is a very easy way for people to use it in a very positive way to combat that. We’re kind of entering, like, an AI misinformation war. That’s, that’s kind of fascinating.

I think that’s a good framing.

That’s a very interesting framing, and that’s why I mentioned, should we be viewing this the way we view gun manufacturers? A gun is a tool that does a very specific thing. If you are buying guns to go hunting, I’m like, be careful while you’re hunting.

If you’re buying guns to try to kill people, well, that’s a totally different conversation. And, um, I wonder if there are the beginnings of recognizing that we can’t throw it all in one basket, because we see applications like AI being involved in helping find the coronavirus vaccine for COVID. Um, you talk about AI’s identification of cancer in x-rays that doctors aren’t able to identify until six months later.

Right. These are very clearly good things. Why would we not want good things? And then the other side is somebody running a mill, a one-man operation sending out millions of bits of misinformation that are going to skew people’s perception of reality. And yeah, you point out AI can detect it.

Well, unless the entities that bring AI onto our computer screens implement those tools, the individuals who are receiving that information are not, of themselves, going to run that check. And that’s, you know... the users on Facebook who are getting all the misinformation thrown into their feed are not stopping and saying, maybe I should figure out how to get some AI to apply to this, to see if these things are true.

That’s the difficulty: it’s intertwined in how the information comes to us, without us being aware that it’s in the feed itself. So that’s where it starts to get complex. I’m also wondering about some of the responses you might have to some of the comments on this, like from Moose, who said, what troubles me most about AI is our politicians aren’t knowledgeable enough to regulate AI properly.

And in some cases they’re involved in direct relationships with the companies. He uses the term corruption. I wouldn’t necessarily say it’s corruption; it’s just that when you have politicians getting donations for their political efforts from companies, I don’t know that that rises to the level of corruption, but it does rise to the level of tainting the relationship.

Um, so when it comes to the people who have to set the policy and make the rules, and you’ve got this blurring of ethical lines due to relationships, and a lack of information, and ignorance about a thing: what do you say, Matt? What are some paths that might help bridge some of those gaps?

Oh man,

you just asked like the million dollar question, Sean.

Uh, yeah, I don’t know, Sean. The answer is, I don’t know, because it’s one of those... I agree with, uh, Moose 5221. I agree that there is a problem here, because a lot of the politicians are so far from being knowledgeable enough to make educated decisions on how to approach this, and it makes me worry about who they’re relying on

to break this down for them, to give them recommendations on how to work on it. I have zero faith that our systems can adapt to this quickly enough. By the time they get caught up to where AI is right now, AI is going to be so far down the road, um, because it’s advancing very quickly. Uh, I, I don’t know what the answer is.

I really don’t. Other than, if you have knowledge on this stuff, to reach out to your representative and speak up. Um, give them recommendations, write them, call them. That would be my recommendation: be active in this discussion. We cannot be passive participants in this at all.

We have to be voicing our concerns.

Let me jump through some of the following comments, which tie directly into this, and then give a suggestion that I found at the end of it. All right. Some other comments included this from Bumpty, who said: “Hold these tech companies accountable.” How, exactly? The corporations don’t answer to us.

The government doesn’t even answer to us. Without explaining how, suggesting we hold them accountable is totally empty. I don’t disagree with you, Bumpty, that us saying we’ve got to hold these companies accountable is the easy side of it. The hard side is knowing where that lever is. So, uh, we appreciate your pushback on that.

I would say there are ways to hold them accountable. There are groups like, um, the EFF, is that it? The EFF? Was it the Electronic Frontier Foundation? I think that’s it. They’re taking action on this. They’re doing class-action lawsuits against these companies, taking them to court and showing how people have been damaged.

There are groups of creatives, people who are writers or artists, that have gotten together and are suing OpenAI and these different companies for taking their work without permission. So there are ways we can hold them accountable by doing this stuff. The other thing you can do is, like, I don’t agree with this sentiment that we have no control over corporations.

You can boycott that corporation. They make money from us. So if we stop using their product, that’s one way we can hold them accountable, in addition to the judicial system: suing them and trying to get things moving that way ourselves. So there are things already in effect right now. It’s going to be slow, because the judicial process might take a couple of years for things to completely work out, but there are things already underway to try to sue these companies and slow them down.

And trying to get them to make good for all the bad stuff they’ve done. There, there is stuff already happening.

Yeah, there’s an absolute reliance on unified pushback, which is going to have a larger impact, I think, than individual attempts to boycott. And I say that based on, you know, evidence that shows that when politically minded movements say, stop purchasing from this company because they supported X, Y, or Z, it very often has very little impact, and in some cases can create a counter-response that actually increases sales.

So it’s sometimes, um, unfortunate in that regard. But unified pushback, and often that takes the form of a union response, is another thing. And you mentioned, uh, creative industries pushing back. Um, as I mentioned in my lead-in and in the closing, I’m a writer, and one of my books was stolen, let’s not put too fine a word on it, uh, stolen by

uh, OpenAI. When they were trying to teach it how language worked, they did so by going to pirate websites and just lifting hundreds of thousands of books and putting them into the undergirding software of the AI system, so that it was learning how language worked, and how to interact with language in the way that would be required for public use.

When they were pushed back on this, the response from the tech companies was, effectively, well, if we can’t steal it, then it’s going to be too expensive for us to do this. Which, if you stop for a moment and think about that as an argument, really shows the disconnect between where they think they’re going and how the world actually works.

Um, that’s being responded to by the Authors Guild, which is effectively a professional association, like a union for authors, and the Authors Guild is in the process of a lawsuit against those corporations. Another example of this, I think, is from just last year, and I think it’s funny that we’re still seeing the slow rollout of the impact of this.

We had the Actors Guild and the Writers Guild, uh, strike against film and television. There are still moments where my partner and I will be watching television and she will say, well, how come this season is so short? Or how come the season doesn’t seem as well written as the previous one?

And I’m still reminding her: this is probably an echo of the strikes. This is what we’re seeing. A key aspect of those strikes was to push back against the use of AI. It was a very timely moment for those unions to be negotiating new contracts, as AI was just getting into the news in the way we’re now accustomed to.

If those strikes had taken place three years earlier, they probably wouldn’t have incorporated the protections that are in place now for those industries. Key aspects of that were: you can’t use AI to recreate an actor and place them in a program, and how many writers are required to be on staff for a writers’ room to be considered legit.

The industry was headed in the direction of saying, we’re going to have a sitcom, and we’ll have two writers, and the rest of the scripts will all be written by AI. And those two writers will just polish them to make sure they’re good enough. That means we don’t need 12 people in the room anymore.

It’s a cost-saving measure, and it is done for the bottom line. And as we see more and more products come to television and theaters that cost a hundred million dollars to make and make 50, the industry is looking for ways to cut costs. Heaven forbid they go the other direction and make smaller projects that are better.

Like that’s too difficult. Um, but we’re seeing industry pushing back against this, and that is part of what’s required. The other side of this, I was going to say, is for us as individuals to be well informed. There are some sources of information that I think are reliable, like aiethicsinitiative.org.

It is aiethicsinitiative.org, and it is a project between MIT and an institute at Harvard. And it wrestles with all of these questions. They have some articles on their website, and there may be ways that people can take this and share it with their politicians. As Matt said, if you’re looking to help inform your representatives, taking this information, sharing it with them, and linking them to this might be a way of saying, I’m afraid you don’t have proper information;

I want to make sure you understand what’s going on. The work of this initiative is to look at the ethical impact and the use cases, like AI in law enforcement, big, big, scary terrain for us, to say, oh, if AI is identifying people as they’re getting onto planes, what happens when it’s misidentifying people, or what happens when you as an individual get flagged?

And there’s no way for you to unflag yourself within the system. Issues like that. It’s a big, thorny topic. And Matt, jumping off of this conversation and inviting people to jump into our comments, is there anything that you’re curious about from our viewers and listeners, that you want them to share with you, so that you know what direction to start looking as you build more content around this?

It would honestly just be letting me know what areas of AI you’re most interested in, like all the avenues we’ve just talked about today. Are you interested in AI ethics? Are you interested in where it’s being used for positive ends? Are you more concerned about the negatives?

Like, I’m really curious what interests you more.

So please do jump into the comments and let us know what you thought about this discussion. And as Matt suggested, let him know where you think he might steer the ship in the future. Thank you so much. Your comments really do impact the show, and they help Matt with the Undecided with Matt Ferrell program.

And don’t forget, if you’d like to support us in other ways, you can leave a review, you can share us with your friends, and please don’t forget to subscribe wherever it is you’re picking us up. We are available pretty much everywhere. If you’d like to more directly support us, you can click the join button on YouTube, or you can go to StillTBD.FM and click the become a supporter button there. Both of those ways allow you to throw some coins at our heads. We appreciate the welts, and then we get down to the business of making the podcast. Thank you so much, everybody, for taking the time to watch or listen, and we’ll talk to you next time.

