Powering Progress: Insights on AI from Investment Strategist Roosevelt Bowman

Audio Description

What should investors know at this stage of the AI boom? Bernstein’s Roosevelt Bowman offers a nuanced perspective informed by his expertise in data science.

Transcript

This transcript has been generated by an A.I. tool. Please excuse any typos.

Stacie Jacobsen: [00:00:00] Thanks for joining us today on the Pulse by Bernstein, where we bring you insights on the economy, global markets, and all the complexities of wealth [00:00:15] management. I'm your host, Stacie Jacobsen.

Last summer, we hosted Bernstein's Lei Qiu on the podcast to talk about the transformative potential of AI. Now that we've had more data available and experts have had time to [00:00:30] analyze the growing impact of AI, we figured it was a good time to revisit the topic. From a consumer perspective, the focus is on the output of AI models like generative images or chatbots.

But for investors, what happens behind the scenes is more important. [00:00:45] As we saw in 2023, companies that make the semiconductors needed to power AI computing saw their share prices go up dramatically. As AI matures and the paradigm shift becomes a new reality, investors will need to understand the technology and intellectual capital that [00:01:00] support the AI ecosystem.

Data collection and storage, energy for computing needs, and cloud computing are all areas where we could see significant innovation and growth. So to gain a better understanding of how these pieces work together, we're joined today by [00:01:15] Bernstein Senior Investment Strategist Roosevelt Bowman.

Roosevelt, thanks for being here.

Roosevelt Bowman: Thank you for having me, Stacie. I appreciate it.

Stacie Jacobsen: Look, I'll start by saying that AI is a hugely complex topic, and we could really take this conversation in many directions. But [00:01:30] where I'd like to start today is with the data. What should we know about the role of data in creating successful AI models?

Roosevelt Bowman: So if you think about data and its importance in creating the models, it's really just like a recipe. So I like chocolate [00:01:45] chip cookies. I'll stick with food. If you have the freshest and best ingredients, it's gonna taste pretty good. If you're kind of rummaging in the back of the cabinet for some stuff that's pushing up against the expiration date, your outcome is not gonna be nearly as [00:02:00] tasty.

So, it's really similar. It's garbage in, garbage out: if you have bad data, or data that maybe isn't structured the way you need it, and I'll come back to that in a moment, then you end up with a model and outputs that are not terribly useful. So the data really is [00:02:15] the crux and the foundation of a successful, useful artificial intelligence model.

Stacie Jacobsen: Okay so let's say we do have those ingredients that are at the front of the pantry. What are the next steps then in terms of building out these AI models?

Roosevelt Bowman: Sure. So I [00:02:30] think it really comes in two parts or two forms. When you think about the data, the structured data would be your nice, normal, already formatted, easy-to-understand information.

So you're looking at a company, you know how much they sold, you know how much revenue they [00:02:45] got for their sales. Pretty straightforward. The unstructured data would be more of what you observe, or data that comes in a text form that isn't nicely and neatly packaged. You're thinking about foot traffic in a mall, or text messages from [00:03:00] consumers to the company that they're buying products from.

So I think that's an example of unstructured data where you really have to take the raw information and convert it into a usable form. So once we have those two sets of data, both structured and unstructured, [00:03:15] then it's an iterative process. You're not gonna come up with the model that produces the really good output the first time.

You're gonna have to keep working at it, working at it, working at it until you get a really good solution.
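
To make the structured-versus-unstructured distinction and the iterative fitting concrete, here is a minimal, hypothetical sketch in Python. The store records, customer texts, column names, and keyword list are all invented for illustration, and the fitting loop is just one simple way to show the kind of iterative refinement Roosevelt describes, not any specific Bernstein workflow.

```python
# Hypothetical example: structured sales data plus unstructured customer texts.
structured_rows = [
    {"store": "A", "units_sold": 120, "revenue": 2400.0},
    {"store": "B", "units_sold": 95,  "revenue": 1805.0},
]

unstructured_texts = [
    "Love the new sneakers, but shipping was slow.",
    "Checkout line was long; staff were friendly though.",
]

def texts_to_features(texts):
    """Convert raw text into simple structured features (counts of a few
    hand-picked keywords). Real pipelines use far richer featurization."""
    keywords = ["slow", "friendly", "love", "long"]
    features = []
    for text in texts:
        lowered = text.lower()
        features.append({kw: lowered.count(kw) for kw in keywords})
    return features

text_features = texts_to_features(unstructured_texts)

# The "iterative" part: keep adjusting the model until the error is acceptable.
# Here the model is just a single weight on units_sold predicting revenue.
weight = 10.0
for step in range(100):
    errors = [row["revenue"] - weight * row["units_sold"] for row in structured_rows]
    mean_error = sum(errors) / len(errors)
    if abs(mean_error) < 0.5:        # good enough -> stop iterating
        break
    weight += 0.001 * mean_error     # small correction, then try again

print("structured features from text:", text_features)
print(f"fitted revenue-per-unit after {step + 1} iterations: {weight:.2f}")
```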

Stacie Jacobsen: When we think about the form of AI that most people, [00:03:30] including myself, are exposed to, it's the large language model, or LLM. ChatGPT is the prime example here. What does it take from an energy perspective to power these LLM models?

Roosevelt Bowman: Sure, and when we think about these LLM [00:03:45] models, you know, what are they producing? As you mentioned, Stacie, so many of these really interesting outputs: summarizing a document, taking text, converting it to an image. These are all fascinating developments over a short period of [00:04:00] time, but they do require a lot of energy.

You know, just to level set, think about these data centers, which certainly will house not only the models but all the information, the vast amounts of data that support LLM models. They often [00:04:15] use 50 times as much power as a normal commercial building. And that's not considering the fact that LLM models use a lot more information and data and energy than your normal AI model.

So I think going forward, [00:04:30] that's the most important question, right? It's the tradeoff between these tremendous outputs and all of the energy that's being consumed and the carbon emissions that come with it.

Stacie Jacobsen: So Roosevelt, we've identified that there is an energy usage issue. [00:04:45] What are we trying to do to solve that?

Roosevelt Bowman: There are a couple of solutions, and they're already being implemented, but they do have limitations. The first would be looking at the semiconductors, the computer chips that are running these models. Let's make them smaller, let's make them faster, let's make them more [00:05:00] efficient so they use less energy.

That's, without a doubt, one way of addressing the issue, but there's a limit to how far we can push that innovation. I think the other would be the data. If we go back to our structured versus unstructured [00:05:15] example, if you have data that really isn't formatted the way you need to use it, it takes more energy, more computational power, to get it into the right format to then run the model.

So if you can kind of format the data better, that will also save some [00:05:30] energy. Again, there's a limitation to that solution. We will be confronted, I think, rather quickly with that tradeoff: are these fantastic outputs worth it, given the amount of energy that the LLM models use and the carbon emissions [00:05:45] that are associated with the process?

Stacie Jacobsen: It kind of feels like the, uh, train has already left the station on that one though. Is there really any way to pull it back?

Roosevelt Bowman: There isn't, and you're absolutely right. I think it's really a matter of understanding, hey, how far can we push some of the [00:06:00] innovations, and then maybe streamlining the usage of it.

Without a doubt, it's a key tradeoff.

Stacie Jacobsen: One of the other tradeoffs that is on the mind of many is AI potentially taking away jobs, right? Having machines do tasks that humans are currently doing. [00:06:15] What are your thoughts on that?

Roosevelt Bowman: You know, when you look through history, you see this worry a lot. Whenever new innovative technologies emerge, the fear is that all the jobs are gonna be taken by a program or a computer or a robot. There will be replacement. I [00:06:30] don't mean to be flippant about it at all, but I think the first stage is much more of a collaborative effort that improves the productivity of existing workers rather than replacing them.

So, you know, I'm a data scientist. I've built machine learning models where it gets [00:06:45] the highlights, or provides me with the highlights, of a document. So for a 90-page research piece, all of a sudden I can read 20 pages and get some really important information to our clients a lot faster. The program did not replace me.

I still need [00:07:00] to be there, be in the meeting, answer the client's questions, but I'm able to do that much more efficiently than I would have been able to in the past. And I think that, to me, is the stage that we're in now. And without a doubt, if you have people that are open-minded about using the tools [00:07:15] and with some technical proficiency, there are really some great productivity gains to be had.
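
To make the "highlights" idea concrete, here is a minimal, hypothetical sketch of an extractive approach that scores sentences by keyword frequency and keeps the top few. It is not the model Roosevelt built, just one common way to sketch the idea; the sample report text is invented for illustration.

```python
import re
from collections import Counter

def extract_highlights(document: str, num_sentences: int = 3) -> list[str]:
    """Tiny extractive summarizer: score each sentence by how many of the
    document's most frequent words it contains, then keep the top sentences."""
    sentences = re.split(r"(?<=[.!?])\s+", document.strip())
    words = re.findall(r"[a-z']+", document.lower())
    # Crude stopword filter so common words don't dominate the scores.
    stopwords = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "are", "we"}
    counts = Counter(w for w in words if w not in stopwords)

    def score(sentence: str) -> int:
        return sum(counts[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    ranked = sorted(sentences, key=score, reverse=True)[:num_sentences]
    # Return highlights in their original order so they still read naturally.
    return [s for s in sentences if s in ranked]

# Hypothetical usage: in practice the input would be a 90-page research piece.
report = (
    "Semiconductor demand rose sharply this quarter. "
    "Data center construction continues to accelerate. "
    "Energy costs remain the largest constraint on data center growth. "
    "Management expects margins to be stable."
)
for line in extract_highlights(report, num_sentences=2):
    print("-", line)
```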

Stacie Jacobsen: Yeah. And I love that you just used that example and the fact that you still have to layer on the human brain and the ability to think, because [00:07:30] hallucinations continue to be an issue. You know, I've read one study that found somewhere between 3% and 27% of outputs are actually entirely fabricated.

How confident are we in the output of some of the information that we're getting?

Roosevelt Bowman: Sure. It's a [00:07:45] really good question, and I sort of think of it in two separate scenarios, right? When you think about artificial intelligence models, they're algorithms. They're based on tons of data that say, you know what?

Given the range of outcomes, this is [00:08:00] probably what's gonna happen, and here's your answer. So it works really well if your path or method to getting to the answer is consistent, it doesn't change over time. If it does change over time, you can end up with a really bad answer. So an investing example would be [00:08:15] asking a generative AI program, ChatGPT, whatever, how to build a model to predict an exchange rate. That's not going to end well. It will give you lots of flowery language about how to build the model. You will trade that model. You will lose your money. And the [00:08:30] reason for that is predicting currencies is really difficult, and the process or model to predict them changes quite often, every three or four years in some cases.

So that's where, I think, there's a downfall or a pitfall with AI: where [00:08:45] if the person doesn't have a good base of knowledge of what they're asking about, and the process of getting the answer is variable rather than constant, then you can end up with an answer that's not terribly useful.
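
To illustrate the variable-versus-constant point, here is a minimal, hypothetical sketch of a shifting relationship: a toy "exchange rate" whose driver flips sign partway through the sample, so a relationship fit on the first regime gives badly wrong answers in the second. All the numbers are synthetic; this is not a real currency model.

```python
# Regime 1: the toy "exchange rate" moves +2 units per unit of the driver.
regime_1 = [(x, 2.0 * x + 1.0) for x in range(1, 11)]
# Regime 2: the relationship flips to -2 per unit, as if the regime changed.
regime_2 = [(x, -2.0 * x + 1.0) for x in range(11, 21)]

def fit_slope(pairs):
    """Ordinary least-squares slope of y on x."""
    xs = [x for x, _ in pairs]
    ys = [y for _, y in pairs]
    x_mean = sum(xs) / len(xs)
    y_mean = sum(ys) / len(ys)
    num = sum((x - x_mean) * (y - y_mean) for x, y in pairs)
    den = sum((x - x_mean) ** 2 for x in xs)
    return num / den

slope = fit_slope(regime_1)            # learned only from the old regime
for x, actual in regime_2[:3]:
    predicted = slope * x + 1.0        # predictions land far from reality
    print(f"x={x}: predicted {predicted:+.1f}, actual {actual:+.1f}")
```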

Stacie Jacobsen: For investors to be truly confident [00:09:00] in some of the outputs of AI, how is it gonna get better? How are we gonna be able to take that skepticism out of it and have more confidence in the outputs?

Roosevelt Bowman: Yeah, I mean, I think there's a couple of ways. The first way is really having people that are [00:09:15] open-minded about using the technology. That sounds simple, but it's not, quite frankly. There are lots of fears, as we mentioned before, about replacement.

So having people that are open-minded and understanding, even if they don't have the highest level of technical skill, or maybe even have low [00:09:30] technical skill, they can integrate it into their work. I think the other part of it, too, is that while we're enamored with these unbelievable outputs, it's understanding and being self-aware about some of the limitations and the inherent bias, right? And I mean that in a [00:09:45] mathematical sense.

So if you're building a model, or you have data that goes in that really isn't factual, that's incorrect, you're gonna end up with poor results. Sometimes those incorrect statements are comforting to the user, and [00:10:00] so there isn't a motivation to change the information to actually correct it.

So I think that's where the kind of next stage is: being open-minded to using the tools, being open and honest about where some of the information the models have been built on [00:10:15] is not correct, and how do we go change it and, quite frankly, make it more accurate going forward.

Stacie Jacobsen: So when we talked with Lei Qiu last year, the investment landscape around AI was still really new.

It was July of 2023, and [00:10:30] it was almost just too new to have a clearly defined perspective on some of the companies that might be the winners and the losers. As 2023 unfolded with the Mag Seven, we really did see some of those companies start to convert that into price [00:10:45] appreciation going into 2024 and beyond.

Where do you think the winners may be?

Roosevelt Bowman: I think this first stage that we've seen is more what I would call the infrastructure of AI. So you think about some companies that are focused on cloud computing, the [00:11:00] storing and accessing of all that data, the semiconductor producers; how do you actually run the programs?

I think that's stage one. Stage two will be more of those companies I referred to before that have tons of data and are open-minded [00:11:15] about changing their workforce to kind of leverage and take advantage of that. So I think that's where we are in terms of phase two, and that's a broader set of companies.

You don't necessarily have to be a tech company that's so focused on the building blocks of how you generate a model, like the data or [00:11:30] the semiconductor. You can be a company that just has lots of information about consumers and a willingness to use it in different ways, and you can really capitalize.

I do think the other firms that can emerge as winners going forward, and Lei has certainly bought some of these in her portfolio, [00:11:45] are those companies that can use AI to produce products that can be used across industries. So it's not just maybe an AI tool for carmakers or some other sort of transportation; [00:12:00] it's all across, right?

So that, I think, is another example of companies where they can really benefit. You know, the more flexible and malleable your solution is, the more different companies across industries can adapt it, and the bigger your kind of bottom-line earnings can be. [00:12:15]

Stacie Jacobsen: To close out here, I'd really just love to get your thoughts on where you think AI is going in the near future, and then maybe on the longer-term disruption.

Roosevelt Bowman: I think over the near future, there's still gonna be a heavy focus on that infrastructure of [00:12:30] AI, but as I mentioned before, I think we're moving towards the second phase, which is those companies that can actually use all that data and can have their products used across industries. I do think the other part of this, too, that'll be really important for investors [00:12:45] is identifying the difference between a good idea, a product that will be profitable, and a stock that's worth buying, right?

You can have a great tool, but if it's very easy to replicate, that's not gonna generate value for [00:13:00] shareholders. And so I think that's the part that's very tricky and important for investors to navigate right now: many companies are going to try to ride this buzz wave of AI, attach AI to the back of their company name, but maybe don't really have the [00:13:15] competitive advantage in the space that's gonna generate shareholder value year after year.

So I think we're gonna have to look at these different companies with a much more discerning eye and not just be fooled by tagging AI to their business and their processes.

Stacie Jacobsen: All right, [00:13:30] Roosevelt, thanks so much for joining us today.

Roosevelt Bowman: Thank you, Stacie. I appreciate it.

Stacie Jacobsen: Thanks to everyone for giving us a listen. For the previous episode on AI with Lei Qiu, click the link in this episode's description. And if you enjoyed this episode, please subscribe to the Pulse by Bernstein on [00:13:45] your favorite podcast platform. I'm your host, Stacie Jacobsen.

Wishing you a great rest of the week.

The information presented and opinions expressed are solely the views of the podcast host commentator and their guest speaker(s). AllianceBernstein L.P. or its affiliates makes no representations or warranties concerning the accuracy of any data. There is no guarantee that any projection, forecast or opinion in this material will be realized. Past performance does not guarantee future results. The views expressed here may change at any time after the date of this podcast. This podcast is for informational purposes only and does not constitute investment advice. AllianceBernstein L.P. does not provide tax, legal or accounting advice. It does not take an investor’s personal investment objectives or financial situation into account; investors should discuss their individual circumstances with appropriate professionals before making any decisions. This information should not be construed as sales or marketing material or an offer or solicitation for the purchase or sale of any financial instrument, product or service sponsored by AllianceBernstein or its affiliates.
