
AI in real estate investment

Lately, I keep seeing posts about people offering “AI underwriting tools.” Built with Claude Code, some Python in the backend, an AI-generated front end, and priced at €20 a month. The idea is simple: drag and drop a broker’s underwriting PDF, let AI extract the data, structure it, build a discounted cash flow, layer in assumptions, and get to an acquisition price based on Loan-To-Value and required IRR. 

What used to take analysts days now takes minutes. 

At first glance, it feels like a very logical next step. No dependency on software, no waiting, just prompt and go. Naturally, I decided to give it a try. I opened ChatGPT, found an underwriting file online, and gave it a fairly open prompt: 

“I need you to underwrite this property. Be detailed and make sure you give me the purchase price at an appropriate IRR for the risk. The IRR needs to depend on the risk so you should work it out. Borrow at 5% fixed rate with 50% LTV. Exit after 10 years. Decide what the cap rate should be. Make your own market assumptions on anything you need and state them clearly” 

Then I went for a coffee. About 3.5 minutes later, it came back with a €10.9m valuation at a 12% levered IRR, including a basic scenario analysis. Hard not to be impressed. 

But I wanted to understand what was actually happening underneath, so I asked for the Excel model. Roughly 14 minutes later, I had it: inputs, a 10-year cash flow, KPIs, everything you would expect. Naturally, I went looking for the mistake, because it seemed too good to be true. So, was there one?

Yes, there was. It had double-counted the exit value in the IRR calculation: the sale proceeds effectively entered the terminal cash flow twice. A fairly basic error, but a reminder that spotting it still relies on experience. There is a natural tendency to check the output, not because AI is bad, but because you know how easy it is for small things to slip through. Now take that one step further.
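The deal numbers below are made up, but a minimal sketch shows why this kind of error matters: counting the exit value twice in the final year's cash flow quietly adds several points to the levered IRR.

```python
def npv(rate, flows):
    """Net present value of cash flows, where flows[t] occurs at year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

def irr(flows, lo=-0.99, hi=1.0, tol=1e-9):
    """IRR by bisection; assumes NPV crosses zero once on [lo, hi],
    which holds here (one outflow followed by inflows)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical deal: buy for 100, net cash flow of 6 per year, sell for 120 in year 10.
equity = -100.0
annual = [6.0] * 10
exit_value = 120.0

correct = [equity] + annual[:-1] + [annual[-1] + exit_value]
double_counted = correct[:-1] + [correct[-1] + exit_value]  # exit value added twice

print(f"correct IRR:        {irr(correct):.2%}")
print(f"double-counted IRR: {irr(double_counted):.2%}")
```

On these numbers the correct IRR lands a little above 7%, while the double-counted version reports over 13%. Both spreadsheets look plausible; only checking the terminal-year formula reveals which one is wrong.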

As a software developer, I talk to potential clients about replacing Excel with a software solution. A typical investor with 50 assets already deals with 50 Excel files. Each asset has its own model. If you want to change an assumption or adjust the modelling logic, you have to do it 50 times. Now AI comes in and generates an Excel file for each asset. If something changes, you either ask AI to regenerate all 50 files or go in and update them manually. Either way, you end up in the same place: a growing pile of Excels with no real overview.

Each model stands on its own. Fix something in one, and there is no guarantee the next one does not have a different issue. So you review them. One by one. 

At that point, it is worth asking what actually saves time. Generating everything with AI, or working from a consistent model that you trust and apply across assets. AI is clearly changing the market. That part is obvious. But I do not think the future sits in standalone tools that generate models on demand without any structure around them. 

What makes more sense is a combination. 

Let AI handle the repetitive parts like extracting and structuring data or drafting a first version of the model. Then run that through a setup where definitions are fixed and calculations are consistent. Something where you can see what changed and decide whether it makes sense before moving forward. 
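What "fixed definitions and consistent calculations" can mean in practice is a single shared model applied to every asset, rather than one spreadsheet per asset. The field names, checks, and figures below are illustrative, not any particular product's schema:

```python
from dataclasses import dataclass

@dataclass
class AssetInputs:
    """One fixed input definition shared by all assets (hypothetical fields)."""
    name: str
    purchase_price: float  # EUR
    annual_noi: float      # EUR, net operating income
    exit_cap_rate: float   # e.g. 0.055 = 5.5%

    def validate(self):
        """Reject implausible extracted data before it reaches the model."""
        errors = []
        if self.purchase_price <= 0:
            errors.append("purchase_price must be positive")
        if not 0.01 <= self.exit_cap_rate <= 0.15:
            errors.append("exit_cap_rate outside plausible range")
        return errors

def yield_on_cost(asset: AssetInputs) -> float:
    # Defined once, applied identically to every asset in the portfolio.
    return asset.annual_noi / asset.purchase_price

portfolio = [
    AssetInputs("Office A", 10_900_000, 600_000, 0.055),
    AssetInputs("Retail B", 4_200_000, 260_000, 0.065),
]

for asset in portfolio:
    issues = asset.validate()
    if issues:
        print(f"{asset.name} rejected: {issues}")
    else:
        print(f"{asset.name}: yield on cost {yield_on_cost(asset):.2%}")
```

Change the logic in `yield_on_cost` once and all 50 assets pick it up; bad extracted data is stopped at validation instead of silently flowing into a valuation.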

That is also how we are approaching it with Planon RE Assets. We are building an MCP server that allows AI to interact directly with the database. A user can upload structured or unstructured data, which AI extracts and transforms based on predefined data definitions. The data is then validated within Assets, and AI works through any validation errors. The user can review the data, see what has changed, and understand the impact before approving it. Once approved, the data is imported into a validated financial model, where it can be further analysed and adjusted, either through custom interfaces optimised for the task or by interacting with AI in natural language.
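The review step, seeing what changed before approving it, can be sketched as a simple diff between the current model inputs and the AI-proposed values. This is a toy illustration, not the actual implementation; the field names and figures are made up:

```python
# Current values in the validated model vs. values proposed after AI extraction.
current = {"rent_per_sqm": 18.0, "vacancy_rate": 0.05, "opex_ratio": 0.30}
proposed = {"rent_per_sqm": 19.5, "vacancy_rate": 0.05, "opex_ratio": 0.28}

def diff(current, proposed):
    """Return only the fields whose values would change, as (old, new) pairs."""
    changes = {}
    for key in current:
        if current[key] != proposed.get(key):
            changes[key] = (current[key], proposed[key])
    return changes

for field, (old, new) in diff(current, proposed).items():
    print(f"{field}: {old} -> {new}  (requires approval)")
```

Nothing reaches the financial model until a person has seen this list and signed off, which is the oversight the pile-of-Excels approach loses.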

You still move faster, but you keep oversight. In the end, speed matters, but only if you trust the output. And that hasn’t really changed. What has changed is how we get there. This hybrid way of working brings together the best of both worlds without forcing a trade-off between them. 
