
Generative AI – The New Frontier in CRE | S5E4

In this episode of the Adventures in CRE Audio Series, Spencer Burton and Michael Belasco dive deep into the world of Generative AI and its implications for commercial real estate (CRE). Spencer explains what generative AI is, how it differs from traditional AI, and its growing role in improving workflows across industries.

They also discuss the practical ways professionals can use generative AI to enhance productivity—whether it’s brainstorming, writing emails, or managing data-driven tasks. Along the way, Spencer shares insights into the limitations of generative AI, including hallucinations, and how to work around these challenges.

This episode is packed with insights on the new frontier of AI in CRE. Don’t miss it!



Episode Transcript

Michael Belasco (00:08):

All right. Welcome back to another episode of the A.CRE Audio Series season five. This is actually… I’m most excited about this season honestly just because of where we have gone in the world of technology and how Spencer is actually leading in a lot of cool initiatives.

(00:28):

So we’re going to spend a lot of episodes really diving in both for educational purposes for our audience, and also to give a little bit more insight into how Spencer thinks about things in this space. So this episode, we’re really going to focus in on generative AI and what it is.

(00:44):

And so let me kick off with the first question to you, Spencer. So what is generative AI? How does it differ from other AI technologies currently used in commercial real estate?

Spencer Burton (00:55):

Yeah, so context, if you haven’t watched some of the other episodes, the reason why I’m wearing a hat, okay, is-

Michael Belasco (01:05):

Not because he’s balding.

Spencer Burton (01:06):

… not because I’m balding, but because I am a new tech startup founder, and it’s only appropriate that I am wearing a hat. So that is the reason for that. And the startup that I’m doing is a vertical agentic AI platform. That is why I’m wearing a hat.

Michael Belasco (01:24):

He talked me out of my suit today, by the way. He’s like, “You’re not wearing that suit.” So, I had to change.

Spencer Burton (01:30):

But we’re talking GenAI in this episode, we’ll get into some of the other really cool areas of artificial intelligence that are impacting the industry. But the question like, what is it?

Michael Belasco (01:43):

Yeah, give us a broad… We’re grounded here at the foundations. Give us a…

Spencer Burton (01:49):

So without getting too technical, it’s the ability for models to generate something new. Okay? Now many would hear that and go, “Well, it’s actually not new. It is a replica of its knowledge base and therefore it’s a copy of other things that exist out there.” And that view, by the way, isn’t necessarily wrong; a lot of times it’s a poor copy.

(02:21):

You see a fair… The images, the generative images these days, the diffusion models, they produce an image that is clearly AI-created, and some are better than others or whatever. And in language, when you haven’t instructed your large language model in such a way to produce text that sounds like you, it has a really boring output oftentimes.

Michael Belasco (02:44):

Yeah. It can be inaccurate too in a way that-

Spencer Burton (02:47):

Well, and that’s a whole other issue, is it generates something based on its knowledge base. If its knowledge is outdated, incomplete, or wrong, the output will be too. And that’s where we get into RAG, retrieval-augmented generation, and feeding it a knowledge base that you know is accurate, and that’s a whole other conversation that we can have.

(03:01):

But the point is, yeah, generative AI is using… There’s a machine learning component to it. I won’t get again into the technical point, but it’s essentially using models to produce something new. But then again, how new is it? The question I would ask you though, if you tomorrow, Michael, were to write a song, would that song though not be in part-

Michael Belasco (03:26):

A derivative.

Spencer Burton (03:27):

… a derivative of your inspiration?

Michael Belasco (03:30):

Absolutely. Absolutely. I listen to some of my favorite songs… Everybody, any song you listen to, it is derived from something. And even if it’s in part or a component, which I guess is your point is you don’t learn to play or you don’t learn anything in a vacuum. So everything is that concept building on the shoulders of giants. Well, in this case, the giants may be data or something in this example.

Spencer Burton (03:53):

Or literal giants, not literal giants, but figurative giants, intellectual giants in our society who have created thoughts, perhaps entirely original thoughts, and then new thoughts are built on their shoulders. So that’s generative AI. Now, why it matters is because prior to generative AI, AI was very prescriptive, very linear. I don’t know how else to say it. It was such that the outputs were too predictable to be of value in areas like knowledge work.

Michael Belasco (04:34):

Meaning it was a basic function. Sure, you’re going to get there, but we already know what it is. So give us some… Bring that down to reality I guess in terms of an example or…

Spencer Burton (04:43):

Well, an example… So the new world, the example is a large language model will produce original text that could be of value to us. An image diffusion model could produce an original image, even if it’s a derivative of others.

Michael Belasco (05:04):

Yeah, I’m talking about the primitive in terms of the primitive AI, is this just stuff when this technology was nascent, it’s basically… Where does the term AI drop off, I guess is what I’m trying to ask you?

Spencer Burton (05:21):

Yeah, so AI, artificial intelligence, early intelligence was fairly… Early artificial intelligence was caveman-like.

Michael Belasco (05:34):

Yeah. So a calculator’s artificial intelligence?

Spencer Burton (05:37):

I guess, I don’t know if technically that’s the definition, but effectively that’s the definition. But the point coming back to this conversation around generative AI. Yes, it can produce original things, and those original things can be valuable to us in a way that previous AI wasn’t.

Michael Belasco (05:57):

And most of us experience that day-to-day. Now, I mean, for those that are utilizing just the basic ChatGPT, it’s become a regular thing. All right, so give us specific examples now, because it’s tied into commercial real estate, right?

Spencer Burton (06:11):

Yeah.

Michael Belasco (06:11):

So give us some specific examples of generative AI tools that are revolutionizing the space, like leasing, acquisitions, property management. Give us some examples.

Spencer Burton (06:23):

Yeah, I’ll be the first to say that there is not a tool right now that I would say is revolutionizing commercial real estate because it’s early.

Michael Belasco (06:33):

Okay.

Spencer Burton (06:33):

Okay? There are tools that have the very real potential to change how we do work in commercial real estate, where our focus as people in commercial real estate goes, and the expense side of our business. In other words, they can greatly make our business more efficient, do more with less.

(07:06):

Broader though, there are tools that are absolutely revolutionizing the world and therefore, in a way, revolutionizing commercial real estate. And so obviously ChatGPT and other interfaces to large language models. Anthropic’s Claude is a really good one. Google’s Gemini. Those are large language models that we can interact with that make us faster and better. Okay. A very real example: here we’re sitting, and there’s my Google AI, which is old AI.

Michael Belasco (07:41):

Every time you say A.CRE.

Spencer Burton (07:44):

That is caveman AI back here. If you didn’t hear, I guess I said Google and my Google Assistant chimed in.

Michael Belasco (07:53):

Well, I don’t know if you know when you said A.CRE Siri popped up here on the screen. Yes, which happens all the time.

Spencer Burton (08:02):

Okay, I need to get to the point. What are the tools? For you right now you need to be using one of the large language models and use an interface like a ChatGPT or a Claude, they make you better and faster. And whether it’s writing emails, whether it’s creating outlines for podcast episodes, whether it’s any number of things, researching, brainstorming. Last season we talked about this, so I won’t get much more into it.

Michael Belasco (08:27):

Well, let me add a caveat, and I think this is really important. You cannot trust generative AI point-blank. If you’re taking this to use it, and I’ve had two experiences recently. One is we were using generative AI to look at lease comps, and someone had sent me all the comps and I went… And they said they did it through taking photos of maps and all this stuff. So I got it and everything was wrong. So you got to check.

(08:57):

Another time I was doing research and it gave me a data point and I said, “Which document did you get this from?” And it gave me the document, it said it was in there. I went and looked at the document myself because I was suspicious. It said a 90% whatever it was, and I hit Ctrl+F and looked for the 90% number; it wasn’t there. I said, “Please read the document and tell me exactly which page you found it on.”

(09:24):

It was a long conversation and it said, “You’re right, it’s actually not in here. Would you like me to revise what I had…” So you need to be very careful. So it’s on its way. And this is the point I think we had briefly talked about, which is the difference between, I believe it’s reasoning LLMs, something like that, which-

Spencer Burton (09:42):

Yeah, well, so this concern around hallucinations is real. Hallucinations are when a large language model produces something that’s not real. You see it in AI-generated images where there are six fingers or cars in landscaping or whatever. Yeah, that’s a concern. Now there’s a reason why those hallucinations happen. As you become more adept at using these tools, you’ll learn how to avoid them.

(10:13):

So the quality… These are models. Most everyone who’s listening and watching this understands real estate financial modeling. The quality of your outputs is a function of the quality of your inputs. In the case of a large language model, your input is your prompt and perhaps data that you provide to it. It has a limited context window, which think of as memory, like your own memory.

(10:42):

So if, right now, I gave you a 100-page PDF, Michael, and you read it. Let’s say I gave you time to read it, and then I said, “What is X in this paper?” The difference between you and the model is you have enough wherewithal to say, “I don’t remember.” The model, this diligent little LLM, does not want to let you down, and it’s going to answer your question.

Michael Belasco (11:08):

Yeah.

Spencer Burton (11:08):

Now, if I gave you a one-page PDF and let you read it, and then I asked you that question, you almost certainly would be able to give me the right answer. And it’s the same with these models. In that example you gave, had you better understood its context window and fed it a prompt and data that you knew it could digest, the likelihood of that happening would be much, much lower.
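Spencer’s memory analogy maps onto a practical check: before feeding a document to a model, estimate whether it fits the context window, and chunk it if it doesn’t. A minimal sketch; the ~4 characters-per-token heuristic and the 8,000-token budget are assumptions for illustration (a real tokenizer library would give exact counts):

```python
# Rough guard against overflowing a model's context window.
# Assumes ~4 characters per token -- a common rule of thumb,
# not an exact tokenizer count.

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token."""
    return len(text) // 4

def chunk_text(text: str, max_tokens: int = 8000) -> list[str]:
    """Split text into pieces that each fit the token budget."""
    max_chars = max_tokens * 4
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

doc = "lorem ipsum " * 10_000          # stand-in for a 100-page PDF
if estimate_tokens(doc) > 8000:
    pieces = chunk_text(doc)           # ask about each piece separately
else:
    pieces = [doc]
print(len(pieces))                     # 4 chunks for this example
```

The point of the sketch is the workflow, not the numbers: give the model only what it can fully digest, one digestible piece at a time.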

Michael Belasco (11:37):

So understanding the limitations and how to speak to it is critical. I mean, ChatGPT came out with these folders and you did a post about it. And I was looking at a market, or I was looking at a broad strategy and I dumped all this data in there. I created a big folder and I’m just like… This is the thing and a lot of people just, “Now ChatGPT’s got all the information it needs, and now it’s going to answer all my questions.” But thankfully I was critical because I’m making investment decisions based on this. And you really got to understand first how to work with it, I think, which is something that you have… And you’ve posted a ton of content on that.

Spencer Burton (12:16):

Yeah. I’ve said this before, but I’m going to say it again because this is really important. We need to all stop pretending like the technology is perfect and start thinking about this technology especially as, say, a really smart analyst or intern who joins your team. And if I give that analyst or intern a task, I’m not going to assume that he or she completed it correctly.

(12:44):

I’m going to appreciate their work and they’re going to save me time because they got me to a certain point, but they’ll feed me a deliverable and I will vet and validate the deliverable. I may then go back to them and say, “Hey, this is wrong. Let’s work on this,” or, “This portion is missing.” And if we think about this technology in that way, we’ll throw our arms up less when you get the wrong answer.

(13:10):

And instead, we’ll treat it in the way that it should be treated, which is providing it proper instructions, providing it guidance, giving it instruction in pieces that it can fully digest and not expecting it… And I didn’t mean that comment to be a lecture to anyone. I’m just saying we all expect technology to be perfect, but we don’t expect each other to be perfect. And so why should we expect the technology to be perfect?

Michael Belasco (13:38):

Yeah, so really understanding its limitations. You cannot just-

Spencer Burton (13:43):

Yeah.

Michael Belasco (13:43):

And this is stuff… I mean, everybody… You’re still learning all the… I mean, you’re well in-

Spencer Burton (13:48):

Oh, this is the nascent, nascent technology, right? I mean, it changes every week.

Michael Belasco (13:53):

Yeah. So let’s bring this back really to how you utilize it, you specifically, and maybe stuff that other people aren’t really thinking about today because you use it in ways… You know its limitations, right?

Spencer Burton (14:08):

Mm-hmm.

Michael Belasco (14:08):

And I’m putting you on the spot a little bit here.

Spencer Burton (14:10):

That’s okay.

Michael Belasco (14:11):

But ways where others might not be thinking about how they’re using, again on the spot.

Spencer Burton (14:18):

Well, so when I say this technology, generative AI, is not revolutionizing real estate, it’s because of the complexity of our industry, the nuance in the workflows that we do on a day-to-day basis, and most importantly, the data integrations or the tool integrations that are so key to the workflows that we perform.

(14:42):

For instance, many of us use Yardi in order to perform tasks. And if this tool does not have access to my Yardi data, it can’t perform, or even support, most of the tasks related to Yardi. Now, I can do… There are hacks, there are workarounds that I can do. But the point is that’s why you don’t see it yet, revolution…

(15:04):

And by the way, is one of the reasons why when we’ll get to CRE agents, why I’m so passionate about that, is because I do see a path to revolutionizing workflows in real estate but it requires certain things that you don’t get out of the current tools.

Michael Belasco (15:18):

And even if, let’s say, your basic generative AI platform, your LLM, had access to Yardi, it still wouldn’t really be effective, in the same way you just talked about the limitations. Now there are strategies, and why CRE agents is almost such a necessity in the industry now that this technology is available, because it’s not just simply… As much as it sounds, “Oh, you plug in your generative AI into your-”

Spencer Burton (15:45):

And magically it’s going to be able to perform.

Michael Belasco (15:47):

There’s a lot of work that goes in. You’ve shown me sort of the background of what’s [inaudible 00:15:53]

Spencer Burton (15:52):

Well, imagine a brand new analyst coming out of Michigan shows up at your office. Day one, you gave them a Yardi login and you said, “Do this quarterly valuation for me.” That analyst would be like, “Okay, I’ve never logged into Yardi. I don’t know what a quarterly valuation is, let alone how to perform one.”

Michael Belasco (16:19):

Of all the capabilities in the world to do it. Just there’s…

Spencer Burton (16:21):

Yeah, all of the will, but it doesn’t have a way. And so… Anyway, we’ll get to CRE agents, but that’s the unlock. Now, to answer your question because I don’t want to leave everyone… Okay, I use generative AI myself personally, every single day. Multiple times a day, some days I’m constantly using it. Most of it, though, I think is for more generalist tasks.

(16:44):

All of my writing, generative AI plays a part. Now, if you read my writing, you’d never think that there was an LLM involved, but the LLM is really powerful in helping brainstorm. So I might put down points, and I usually actually start by saying, “Hey, I want to write about this.”

(17:02):

By the way, writing, it may be blog posts related to A.CRE, but more often it’s writing a memo for work, writing an email to someone, putting together… I’m in startup mode right now. So I’m developing thoughts around strategy. I’m building TAM analyses. I’m writing arguments for why this is a good thing to do. And I start with my thoughts. I put them down, I prompt the LLM. I use this RODES Framework, which you can go to A.CRE and see what the RODES Framework is, R-O-D-E-S.

(17:37):

But I use the RODES framework, and I begin by simply introducing it as a companion to this task. I share my initial thoughts, and then I might say things like, “What are two or three other thought…” Let’s use an example. So let’s imagine, this isn’t a great example, but I might say something like, “I’m looking at investing in South Africa. What are the five biggest cities?”

(18:02):

Okay, that’s a very, very simple thing. I could go to Google and figure the same thing out. On a brainstorming task, what I might say is, “Here are two ideas I have for something. Give me three more.” And it would then produce three more, and I might say, “Give me five more.”

Michael Belasco (18:25):

Or even give me reasons why this would be effective or why it wouldn’t be.

Spencer Burton (18:29):

Yeah. And then you could talk through each one and it becomes like a brainstorming pal. And then I might say, “Okay, write an introduction about this.” And then I would take that introduction. Then I would rewrite it in my own words. But I now have a template to get started.
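The structured prompting Spencer describes can be sketched as a simple template builder. The expansion of the RODES acronym used here (Role, Objective, Details, Examples, Sense check) is an assumption drawn from context; see the A.CRE post for the canonical framework:

```python
# Sketch of a RODES-style prompt builder. The five-part expansion
# (Role, Objective, Details, Examples, Sense check) is assumed,
# not quoted from the A.CRE post.

def rodes_prompt(role: str, objective: str, details: str,
                 examples: list[str], sense_check: str) -> str:
    """Assemble the five RODES sections into one prompt string."""
    sections = [
        f"Role: {role}",
        f"Objective: {objective}",
        f"Details: {details}",
        "Examples:\n" + "\n".join(f"- {e}" for e in examples),
        f"Sense check: {sense_check}",
    ]
    return "\n\n".join(sections)

# Hypothetical brainstorming prompt, mirroring the episode's example.
prompt = rodes_prompt(
    role="You are a CRE investment analyst.",
    objective="Brainstorm market-entry ideas for South Africa.",
    details="Here are two ideas I have; give me three more.",
    examples=["Idea 1: industrial near Durban's port",
              "Idea 2: student housing in Johannesburg"],
    sense_check="Flag any idea that relies on data you are unsure of.",
)
```

The resulting string would be pasted into ChatGPT or Claude as-is; the value is that every conversation starts from the same complete scaffold rather than an off-the-cuff question.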

Michael Belasco (18:43):

Yeah.

Spencer Burton (18:43):

And that tool speeds my writing up three, four-fold.

Michael Belasco (18:49):

We had the short-lived weekly meeting with you, the whole team, and you would teach us the latest of what’s going on with generative AI, ChatGPT. And there was one section you had mentioned. You write emails and it helps write emails, but it knows your voice. It knows how to sound like Spencer Burton, and that’s a thing, you can train the AI. So how do you do that? That’s a unique thing. I think it’s very helpful for a lot of people to be able… many people probably don’t know.

Spencer Burton (19:17):

So I have a ChatGPT license, and I have a Claude license. Those are the two I use. I use Claude for coding and I use Claude for long writing.

Michael Belasco (19:30):

Interesting. As opposed to ChatGPT?

Spencer Burton (19:33):

Yeah. I use ChatGPT for custom GPTs. Now, Claude has their Projects concept. I just got adept at custom GPTs. I can’t speak to whether Claude’s might or may not be better. ChatGPT also released a Projects concept. That’s the folders that you were describing, which I actually quite like. I have a blog post up where I talk about the difference between the two and where I use one versus the other. And so I use those both.

(19:59):

Now on custom GPTs, what’s nice about them is it’s essentially you’re setting up your large language model for a certain type of interaction. Now, you could do that without a custom GPT, but every time you would open up a new ChatGPT conversation, you would have to feed it with all of those instructions and perhaps data in order to even just begin the conversation. So a custom GPT allows you to start at a point that’s much later.

(20:29):

And so to this Spencer Burton Ghostwriter GPT, what I did is I have tens of thousands of emails. I have hundreds of blog posts, I have thousands of LinkedIn messages and so forth, and I grabbed all those. And the very first step, basically I fed it into this engine that turned that data into a…

(20:52):

And there’s a tool that I would love to share with the audience. Maybe I’ll find it and include it in the show notes, but it’s a tool that basically indexes that data in such a way that the model can digest it better than it could if you just gave it raw.

Michael Belasco (21:07):

So like that mountain of data, which I was going to ask you about, because if it only has its limitation now it’s the same sort of content. So I guess it would probably make sense.

Spencer Burton (21:15):

Well, that was the first step. And then I said, “I want to teach…” So I just started a random conversation. I took this data, I fed it into this tool that turned it into a more digestible form for a large language model to use. And then I fed it into a conversation and I said, “I want to teach a ghostwriter to write like me. Can you help me? What would be the five components that I would need to teach my ghostwriter so that the ghostwriter writes like me?”

(21:48):

Because I don’t even know how to teach a ghostwriter. So the first point is how do I teach a ghostwriter? I don’t remember exactly what the five points were, but it gave me five points. I said, “Okay, that’s interesting. I don’t like this fourth point. What would be three other points?” And it gave me three other points. I’m like, “Okay, let’s go with one, two, and three.” So it might’ve given me 10 points. I ended up using three.

(22:08):

And then I would feed it the data and I’d say, “Okay, based on what you read, how would you teach the ghostwriter on point one?” And then it produced that for point one. “How would you do it for point two?” So then what it did is it essentially produced a training manual for a ghostwriter. Then I said, “Okay, so if I have a ghostwriter, I want to show them examples of how I write an email introduction.”

(22:37):

So oftentimes I’m introducing one party to another, and I said, “Based on my emails, how do I write email introductions?” And it produced that. And that became an example in the RODES framework, the E is for example. To the extent that you can help your large language model know how you want the output to look, you give it an example.

(22:59):

And so it then produced examples for all these ways that I write, blog posts, emails, email introductions, LinkedIn messages, LinkedIn posts, all these different use cases. And it produced examples of each one of those. And then it produced… And then I worked with it to produce the instructions. It ultimately became the instructions for my custom GPT. I also provided the custom GPT each one of the examples.

(23:24):

I did not provide it the raw file because that would’ve been useless. It was way too much for the custom GPT. But in essence, I created a training manual that the custom GPT uses to write kind of like… and I say kind of like me because I always end up editing it, but it is two or three steps further along than if I used an untrained, or, even though it’s not exactly training in the traditional sense, an improperly instructed LLM instance to write like me.
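The workflow Spencer walks through, distilling raw writing samples into a compact set of style points plus one example per format, can be sketched as assembling an instruction document. The style points, example texts, and layout below are entirely illustrative, not Spencer’s actual instructions or ChatGPT’s custom GPT format (which is free text):

```python
# Sketch: assemble ghostwriter-style custom GPT instructions from
# distilled style points and per-format examples. All content here
# is hypothetical filler standing in for the distilled samples.

style_points = [
    "Short declarative sentences; avoid jargon.",
    "Open emails with a direct, personal line.",
    "Close with a concrete next step.",
]

examples = {
    "email introduction": "Sarah, meet Tom. Tom leads acquisitions at ...",
    "LinkedIn post": "Three lessons from underwriting 100 deals ...",
}

def build_instructions(points: list[str], examples: dict[str, str]) -> str:
    """Combine style points and format examples into one document."""
    lines = ["Write in my voice. Style points:"]
    lines += [f"{i + 1}. {p}" for i, p in enumerate(points)]
    for fmt, sample in examples.items():
        lines.append(f"\nExample ({fmt}):\n{sample}")
    return "\n".join(lines)

instructions = build_instructions(style_points, examples)
```

The key design choice mirrors the episode: the custom GPT gets the distilled manual and examples, never the raw corpus, because the condensed version is what fits and what the model can actually follow.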

Michael Belasco (23:57):

That’s amazing. That’s amazing. Let me step back. I have one more, and I know this one’s probably running a little over, but you had mentioned Claude, ChatGPT, there are these other platforms out there, and they have their strengths and weaknesses, correct? Is there a reason at this point… Are you just exploring right now, or are there reasons to use one over the other for different use cases?

Spencer Burton (24:19):

Yeah. Oh yeah, of course. So first off, there are now hundreds of large language models, and ChatGPT is not a large language model. ChatGPT is the interface through which you access OpenAI’s large language models, and they have numerous large language models that they’ve built.

(24:38):

Google, Gemini is the interface through which you access Google’s large language models. Claude, same thing. And so there are these websites where you can go and access large language models. Some of them you can download and install locally.

Michael Belasco (25:03):

And are they all geared towards different specialties or how…

Spencer Burton (25:08):

Yes and no. So there’s a training process and then there’s a fine-tuning process. And the fine-tuning… First off, the nature of the training, the amount of data that it’s trained on will determine its raw capability, and then the fine-tuning will improve it for certain uses.

(25:30):

And so yes, the fine-tuning will generally make one LLM better than another at certain things.

Michael Belasco (25:39):

Yeah, got you.

Spencer Burton (25:40):

We’re not at a point where I would say, “Oh yeah, this model here is the right model for our industry,” because all of them have their limitations and their strengths and what most of us do… And in most cases, you won’t notice much difference from one to the next until you get into the more technical conversations, which quite frankly are where you get the greatest value though. But yes, there’s absolutely-

Michael Belasco (26:08):

Amazing. All right, well we have a lot more to talk about through this series, just about everything you’re doing, all the expertise you have in this space. So again, this is probably my favorite audio series we’ve done just because we’re like trailblazing here with Spencer. So let’s move on to the next one.

Spencer Burton (26:25):

Go to the next.

Michael Belasco (26:27):

All right.

Announcer (26:29):

Thanks for tuning into this episode of the Adventures in CRE Audio Series. For show notes and additional resources, head over to www.adventuresincre.com/audioseries.