Install an AI LLM on Your Computer: A Step-by-Step Guide
This week, I’ll be hosting my monthly A.CRE Ask Me Anything (AMA), where we dive into various technical topics, and AI in CRE often comes up. Last month, someone asked me to show the group how to install a Large Language Model (LLM) on their computer—essentially a personal ChatGPT that runs locally. So, I figured it would be a great topic to cover during this month’s AMA.
Many people assume you need cloud infrastructure, but running an LLM locally (i.e., on your own personal computer) is easier than you might think, assuming your machine meets the appropriate specifications. In this post, I’ll show you two simple methods for doing this: one using Ollama and the other using Jan.AI. I’ll also provide short videos to walk you through each setup step by step.
- Here at A.CRE, we continue to explore how artificial intelligence is impacting commercial real estate. Check out our growing collection of AI use cases, AI training, AI tools, and custom GPTs for real estate.
Why Run an LLM Locally?
Running a Large Language Model (LLM) on your own computer comes with a host of benefits that make it an attractive option for tech-savvy and non-technical real estate professionals alike. Here are a few reasons why setting up an LLM locally might be the perfect move for you:
- Data Privacy: In CRE, we’re often dealing with sensitive information like property details, client data, and financial reports. Running an LLM locally ensures all your data stays on your device. No need to worry about sending confidential information to external servers. It’s peace of mind knowing your proprietary data remains secure and under your control.
- No Internet Dependency: Whether you’re in the office with limited Wi-Fi or out touring a property with no internet access, an LLM installed on your computer remains accessible. You can use it anytime without relying on a stable internet connection, ensuring uninterrupted workflow.
- Cost Efficiency: Cloud-based LLM services (e.g., ChatGPT, Gemini, or Claude.ai) generally charge a recurring fee for the premium versions of their models. An open-source large language model like Meta’s Llama costs you nothing but the time to install it, assuming, of course, your hardware is powerful enough to run the model (see specs below).
- Latency: With no round trip to a remote server, responses start the moment you hit Enter; actual generation speed will still depend on your hardware.
Minimum Specifications
Before we jump in, it’s important to ensure that your system meets the necessary requirements. Meeting these minimums will help ensure you’re able to install and run open-source LLMs on your computer (a quick script for checking your machine’s specs follows the list):
- Processor: At least an Intel Core i5 or an equivalent processor is recommended to handle the computational demands of running an LLM efficiently.
- RAM: A minimum of 16 GB of RAM is needed for smooth performance, especially when working with larger models or multitasking.
- Storage: Ensure you have at least 10 GB of free storage space. The exact amount may vary depending on the size of the LLM you plan to install.
- Operating System: Compatible with Windows 10 or later, macOS 11 or newer, or a Linux distribution. Make sure your OS is up to date to avoid any compatibility issues.
- GPU (Optional): While not mandatory, having a dedicated GPU like an NVIDIA RTX 3060 or better can significantly speed up processing times and enhance overall performance, especially for more demanding tasks.
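Not sure how your machine stacks up? Below is a minimal Python sketch that reports the relevant numbers. It assumes you have Python installed along with the third-party psutil package (pip install psutil), and it checks RAM and free storage against the minimums above; note that it won’t detect whether you have a dedicated GPU.

```python
# Minimal spec check against the minimums listed above.
# Requires the third-party psutil package: pip install psutil
import platform
import psutil

MIN_RAM_GB = 16    # minimum RAM recommended above
MIN_DISK_GB = 10   # minimum free storage recommended above

ram_gb = psutil.virtual_memory().total / 1e9
disk_gb = psutil.disk_usage("/").free / 1e9  # on Windows, "/" resolves to your current drive

print(f"OS:        {platform.system()} {platform.release()}")
print(f"Processor: {platform.processor() or 'unknown'}")
print(f"RAM:       {ram_gb:.1f} GB ({'OK' if ram_gb >= MIN_RAM_GB else 'below the 16 GB minimum'})")
print(f"Free disk: {disk_gb:.1f} GB ({'OK' if disk_gb >= MIN_DISK_GB else 'below the 10 GB minimum'})")
```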
Method 1: Installing and Running an LLM Using Ollama
What is Ollama?
Ollama is a free and open-source tool that allows you to run various open-source Large Language Models (LLMs) directly on your computer. Whether you need an LLM like Codestral for generating code or Llama 3 as an alternative to ChatGPT, Ollama makes it possible to operate these models locally. It supports multiple operating systems, including Linux (systemd-based distributions), Windows, and macOS (Apple Silicon).
Watch How to Install an LLM Locally Using Ollama
Key Steps for Installation and Use (Ollama)
Setting up Ollama to run an LLM on your computer is straightforward. Follow these steps to get started:
- Go to ollama.com. Start by visiting the Ollama website. This is where you’ll find all the resources you need to download and install the tool.
- Download and Install. On the homepage, look for the download button. Choose the installer that matches your operating system—whether you’re using Linux, macOS, or Windows. Download the installer and run it, following the on-screen instructions to complete the installation process.
- Open a Command Line on Your Machine. Once Ollama is installed, nothing will appear to happen, but Ollama will be running in the background. Open your command line interface (CLI): Terminal on macOS or Linux, or Command Prompt/PowerShell on Windows. The CLI is where you’ll enter the commands to manage your LLMs.
- Go Back to ollama.com and Click ‘Models’. Head back to the Ollama website and navigate to the ‘Models’ section. Here, you’ll find a list of available LLMs that you can run locally.
- Select a Model from the List, Such as Llama 3.1. Browse through the available models and choose one that fits your needs. For example, you might select Llama 3.1 as a ChatGPT alternative, or another model tailored to your specific requirements in commercial real estate.
- Copy the Command That Starts with ‘ollama run…’. Once you’ve selected a model, you’ll find the command to run it in the upper-right corner of the model page. It will look something like ‘ollama run llama3.1’. Copy this command to your clipboard.
- Paste the Command into the Command Line on Your Computer. Go back to your CLI and paste the copied command. Press Enter to execute it. This command tells Ollama to download and set up the chosen LLM on your machine.
- Wait for the LLM to Download and Install. The installation process will begin, and you’ll see progress indicators in the CLI. Depending on the size of the model, your internet speed, and your hardware specifications, this might take anywhere from a few minutes to several hours. Once the download is complete, Ollama will automatically install the model.
- Prompt the LLM from the Command Line and See the Response. After installation, you can start interacting with your local LLM directly from the CLI. Simply type your query, much as you would with ChatGPT, and press Enter. The model will process your input and provide a response right in your terminal. (If you’d rather call the model from a script, see the sketch after this list.)
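If you’d like to go a step beyond the interactive prompt, Ollama also listens on a local HTTP endpoint (http://localhost:11434 by default) that your own scripts can call. Here’s a minimal Python sketch, standard library only, that prompts a model through that endpoint. It assumes the Ollama background service is running and that you’ve already pulled llama3.1 with the command above; the sample prompt is just an illustration.

```python
# Prompt a locally installed Ollama model via its local HTTP API.
# Assumes the Ollama service is running in the background and that
# the model below has already been pulled (e.g. via `ollama run llama3.1`).
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

payload = {
    "model": "llama3.1",  # any model you've already pulled
    "prompt": "Summarize the key drivers of value in a multifamily acquisition.",
    "stream": False,      # return one JSON object instead of a token stream
}

req = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["response"])  # the model's full reply
```

Everything here stays on your machine; the request only travels to the Ollama service running on localhost.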
Method 2: Installing and Running an LLM Using Jan.AI
What is Jan.AI?
Jan.AI is an open-source tool developed by Homebrew Computer Company, designed to provide a ChatGPT alternative that runs entirely offline. It supports multiple operating systems, including Windows, macOS, and Linux, making it accessible to a wide range of users. Jan.AI operates through a graphical interface, which includes a pre-built chat window for interacting with the language model, making it more user-friendly compared to command-line-based tools like Ollama.
In addition to the core functionality, Jan.AI offers extensions such as Cortex, a library that allows for running LLMs locally with added flexibility. The platform emphasizes user ownership and privacy, ensuring that all data remains on your device without being sent to external servers. Jan.AI is maintained by a team of AI researchers and engineers and is supported by a community of users who contribute to its ongoing development and improvement. The project follows open-source principles, allowing users to customize and extend the tool to better fit their specific needs.
Watch How to Install an LLM Locally Using Jan.AI
Key Steps for Installation and Use (Jan.AI):
Setting up Jan.AI to run an LLM on your computer is simple. Follow these steps to get everything up and running:
- Go to jan.ai. Start by visiting the Jan.AI website to access the download section and get the installer for your operating system.
- Download and Install. Download the appropriate installer for your system—whether you’re using Windows, macOS, or Linux. Run the installer and follow the on-screen instructions to complete the installation process.
- Open the Jan Interface. Once installed, launch the Jan.AI application. When the interface opens, locate the ‘Model’ tab in the upper-right corner of the window.
- Select a Model. In the ‘Model’ tab, choose the model you want to use from the dropdown menu, such as Llama 3.1. Click the blue Download/Install button to begin downloading the selected LLM.
- Wait for the Model to Download and Install. The download and installation process can take anywhere from a few minutes to several hours, depending on the model size and your internet speed. Be patient while Jan.AI sets everything up.
- Enable GPU Acceleration (If Available). While the model is downloading, go to Settings > Advanced Settings and turn on GPU Acceleration if your system supports it. This can significantly speed up the model’s performance.
- Install the Cortex Extension. Next, navigate to Settings > Cortex and install the Cortex extension. Cortex is the library Jan uses to run LLMs locally, so the tool needs it to work properly.
- Start Chatting with the LLM. Once the model is fully installed, use the Jan.AI interface to start interacting with your local LLM. Simply type your queries into the chat window and receive responses just like you would with any other AI assistant. (If you’d prefer to call the model from your own scripts, see the sketch after this list.)
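Jan can also expose the model it’s running through a local, OpenAI-compatible API server, which you can switch on from Jan’s settings. The Python sketch below shows what a call to that server might look like; the address (localhost:1337) and the model ID are assumptions based on Jan’s defaults, so confirm both in your install’s settings and model list before running it.

```python
# Query a model running in Jan through its local OpenAI-compatible API server.
# Assumptions: the server is enabled in Jan's settings and listening at the
# address below, and the model ID matches one shown in Jan's model list.
import json
import urllib.request

JAN_URL = "http://localhost:1337/v1/chat/completions"  # assumed default; confirm in Jan's settings

payload = {
    "model": "llama3.1",  # replace with the model ID from Jan's model list
    "messages": [
        {"role": "user", "content": "Draft three questions to ask on an office property tour."}
    ],
}

req = urllib.request.Request(
    JAN_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["choices"][0]["message"]["content"])
```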
Choosing Ollama vs Jan.AI
When it comes to running an LLM locally, both Ollama and Jan.AI have their strengths. Here’s a quick comparison to help you decide which one fits your needs best:
Ollama
Ollama is a solid choice if you’re comfortable using the command line. It’s lightweight, which means it runs efficiently without hogging your system resources. While it’s geared more towards tech-savvy users, the installation and setup process is straightforward. If you prefer having direct control and don’t mind working in a terminal, Ollama is a great option.
Choose Ollama if:
- You’re comfortable with command-line tools.
- You prefer a lightweight solution.
- You want a straightforward setup without additional features.
Jan.AI
On the other hand, Jan.AI is designed to be more user-friendly, featuring a pre-built chat window that makes interacting with the LLM feel more like using a standard application. However, Jan.AI does require installing some additional components to get everything up and running. The upside is that it offers various extensions, allowing you to expand its capabilities as needed. If you’re looking for a more out-of-the-box experience with added flexibility through extensions, Jan.AI might be the better fit.
Choose Jan.AI if:
- You prefer a graphical interface with a pre-built chat window.
- You’re looking for a more user-friendly experience.
- You want the ability to expand functionality with extensions.
Conclusion
Running a Large Language Model (LLM) locally on your computer is a practical option for commercial real estate professionals looking to enhance their workflows. With Ollama, you benefit from a lightweight, command-line tool that offers direct control and efficient performance, ideal if you’re comfortable with technical setups. Alternatively, Jan.AI provides a more user-friendly experience with its graphical interface and pre-built chat window, along with the flexibility to expand its capabilities through various extensions.
Both methods offer significant advantages like improved data privacy, offline accessibility, customization, cost savings, and faster response times. Depending on your technical comfort and specific needs, either Ollama or Jan.AI can help streamline tasks such as generating reports, analyzing market trends, and managing client communications. Give both a try to see which one fits your workflow best and join this month’s AMA for more real estate technical tips.