Introduction
In this blog post, we’ll build a simple but powerful shell application that allows you to interact with the DeepSeek-r1:1.5b language model locally on your machine. This project combines the simplicity of Spring Shell with the power of Spring AI to create a smooth chat experience without needing to rely on cloud-based LLM services.
By the end of this tutorial, you’ll have a command-line interface that lets you chat with a powerful language model running directly on your hardware. The application will be built with Java and Spring Boot, making it cross-platform and easily extensible.
Prerequisites
Before we begin, ensure you have the following:
A computer with at least 8GB of RAM (16GB recommended)
At least 5GB of free disk space
Administrator/sudo privileges on your machine
Basic familiarity with command-line operations
Basic understanding of Java programming (helpful but not required)
Installing Java
Our application will be built with Java, so we first need to install a Java Development Kit (JDK). We’ll use JDK 17, which is a Long-Term Support (LTS) version.
Installing Java on Windows
Visit the Adoptium website at https://adoptium.net/
Download the latest Temurin JDK 17 installer for Windows
Run the installer and follow the on-screen instructions
Accept the default installation settings
After installation, verify Java is correctly installed by opening Command Prompt and typing:
java -version
javac -version
You should see output indicating Java 17 is installed.
Installing Java on macOS
Visit the Adoptium website at https://adoptium.net/
Download the latest Temurin JDK 17 .pkg installer for macOS
Open the downloaded .pkg file and follow the installation steps
After installation, verify Java is correctly installed by opening Terminal and typing:
java -version
javac -version
You should see output indicating Java 17 is installed.
Installing Java on Linux
For Ubuntu/Debian-based distributions:
sudo apt update
sudo apt install openjdk-17-jdk
java -version
javac -version
For Red Hat/Fedora-based distributions:
sudo dnf install java-17-openjdk-devel
java -version
javac -version
Installing IntelliJ IDEA
IntelliJ IDEA is a powerful IDE for Java development. We’ll use it to write and manage our project.
IntelliJ IDEA Installation on Windows
Visit JetBrains website at https://www.jetbrains.com/idea/download/
Download the Community Edition (free version)
Run the installer and follow the on-screen instructions
Accept the default settings or customize as needed
Launch IntelliJ IDEA after installation
IntelliJ IDEA Installation on macOS
Visit JetBrains website at https://www.jetbrains.com/idea/download/
Download the Community Edition .dmg file
Open the .dmg file and drag IntelliJ IDEA to the Applications folder
Launch IntelliJ IDEA from your Applications folder
IntelliJ IDEA Installation on Linux
Visit JetBrains website at https://www.jetbrains.com/idea/download/
Download the Community Edition .tar.gz file
Extract the archive to a location of your choice:
tar -xzf ideaIC-*.tar.gz -C /path/to/installation/directory
Navigate to the bin directory and run the IDE:
cd /path/to/installation/directory/idea-*/bin
./idea.sh
Installing Ollama and the DeepSeek Model
Ollama is a tool that allows you to run large language models locally. We’ll use it to run the DeepSeek-r1:1.5b model.
Installing Ollama on Windows
Visit Ollama’s website at https://ollama.com/download
Download the Windows installer
Run the installer and follow the on-screen instructions
After installation, Ollama will run as a background service
Installing Ollama on macOS
Visit Ollama’s website at https://ollama.com/download
Download the macOS .dmg file
Open the .dmg file and drag Ollama to the Applications folder
Launch Ollama from your Applications folder
Installing Ollama on Linux
Run the following command in your terminal:
curl -fsSL https://ollama.com/install.sh | sh
Pulling and Testing the DeepSeek-r1:1.5b Model
Now that Ollama is installed, we need to download the DeepSeek-r1:1.5b model:
ollama pull deepseek-r1:1.5b
This will download the model, which is roughly 1.1GB in size. Once downloaded, we can test it directly through Ollama’s CLI:
ollama run deepseek-r1:1.5b
You’ll enter a chat session with the model. Try typing a simple prompt like:
> Hello, can you introduce yourself?
The model should respond with an introduction. To exit the chat session, simply type:
> /bye
Creating a Spring Boot Project
Now that we have our environment set up, let’s create our Spring Boot project.
Bootstrapping with Spring Initializr
Open your web browser and navigate to https://start.spring.io/
Configure your project with the following settings:
Project: Maven
Language: Java
Spring Boot: 3.2.x (or the latest stable version)
Project Metadata:
Group: com.example (or your preferred group ID)
Artifact: deepseekchat
Name: deepseekchat
Description: Spring Boot Shell application for DeepSeek chat
Package name: com.example.deepseekchat
Packaging: Jar
Java: 17
Add the following dependencies:
Spring Shell
Ollama
Click "Generate" to download the project as a ZIP file
Extract the ZIP file to a location of your choice
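If you want to double-check the generated build file, the pom.xml should contain dependency entries roughly like the ones below. Treat these as an approximation: exact artifact IDs, especially for the Spring AI Ollama starter, have varied between Spring AI releases, so defer to whatever Spring Initializr actually generates.

```xml
<!-- Spring Shell: interactive command-line support -->
<dependency>
    <groupId>org.springframework.shell</groupId>
    <artifactId>spring-shell-starter</artifactId>
</dependency>
<!-- Spring AI Ollama starter: a ChatClient backed by a local Ollama server -->
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-ollama-spring-boot-starter</artifactId>
</dependency>
```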
Importing the Project into IntelliJ IDEA
Open IntelliJ IDEA
Select "Open" or "Import Project"
Navigate to the extracted project folder and select it
Wait for IntelliJ to import and index the project
Implementing the Application
Now, let’s implement our chat application with Spring Boot and Spring Shell.
Main Application Class
First, let’s examine the main application class:
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.shell.command.annotation.CommandScan;

@SpringBootApplication
@CommandScan
public class DeepSeekChatApplication {

    public static void main(String[] args) {
        SpringApplication.run(DeepSeekChatApplication.class, args);
    }
}
Let’s break down this code:
@SpringBootApplication: This is a convenience annotation that combines all of the following:
@Configuration: Tags the class as a source of bean definitions
@EnableAutoConfiguration: Tells Spring Boot to start adding beans based on classpath settings
@ComponentScan: Tells Spring to look for components in the application’s package
@CommandScan: This annotation comes from Spring Shell. It tells Spring to scan for command classes that can be used in the shell interface.
main method: The standard entry point for a Java application. It calls SpringApplication.run() to bootstrap our application.
Chat Command Class
Next, let’s implement the command that will handle chat interactions:
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.shell.command.annotation.Command;

@Command
public class ChatCommand {

    private final ChatClient chatClient;

    public ChatCommand(ChatClient.Builder builder) {
        this.chatClient = builder.build();
    }

    @Command(alias = "chat", description = "Chat with DeepSeek!")
    public String chat(String message) {
        return this.chatClient.prompt(message).call().content();
    }
}
Let’s break down this code:
@Command: This class-level annotation marks the class as a command provider for Spring Shell.
ChatClient: This comes from the Spring AI library and provides an interface for communicating with the LLM.
Constructor: We use constructor injection to get a ChatClient.Builder, which we then use to build our ChatClient.
@Command(alias = "chat", description = "Chat with DeepSeek!"): This method-level annotation defines a command that can be invoked in the shell. The alias is what users type to execute the command.
chat method: Takes a message from the user, sends it to the LLM through the chat client, and returns the response content.
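Before handing user input to the model, you may want some light validation. The helper below is a hypothetical addition (it is not part of Spring Shell or Spring AI): the chat method could run each message through it so that blank or oversized prompts never reach Ollama.

```java
// Hypothetical input-validation helper (not part of Spring Shell or Spring AI):
// rejects empty prompts, trims whitespace, and caps the message length.
public class PromptValidator {

    // Illustrative limit chosen for this sketch, not a model constraint.
    static final int MAX_LENGTH = 4000;

    // Returns a cleaned prompt, or throws if the input is unusable.
    public static String clean(String message) {
        if (message == null || message.isBlank()) {
            throw new IllegalArgumentException("Message must not be empty");
        }
        String trimmed = message.trim();
        if (trimmed.length() > MAX_LENGTH) {
            throw new IllegalArgumentException("Message too long: " + trimmed.length());
        }
        return trimmed;
    }

    public static void main(String[] args) {
        System.out.println(clean("  What is the capital of France?  "));
    }
}
```

In the ChatCommand above, the chat method would simply call PromptValidator.clean(message) before passing the result to the chat client.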
Application Properties
Finally, let’s configure the application through the application.properties file:
spring.application.name=deepseekchat
spring.ai.ollama.chat.model=deepseek-r1:1.5b
spring.main.web-application-type=none
spring.shell.interactive.enabled=true
Let’s break down these properties:
spring.application.name: Sets the name of our application.
spring.ai.ollama.chat.model: Specifies which model to use with Ollama. We’re using the deepseek-r1:1.5b model that we pulled earlier.
spring.main.web-application-type=none: Tells Spring Boot not to start a web server, since we’re building a console application.
spring.shell.interactive.enabled=true: Enables Spring Shell’s interactive mode.
Running the Application
Now that we’ve implemented our application, let’s run it.
Running from IntelliJ IDEA
Make sure Ollama is running in the background
In IntelliJ IDEA, find the main class (DeepSeekChatApplication)
Right-click on it and select "Run DeepSeekChatApplication"
After the application starts, you should see a shell prompt
Try chatting with the model by typing:
shell:>chat "What is the capital of France?"
The model should respond with information about Paris.
NOTE: The model will always take a moment to respond, because running an LLM on your local machine is resource-intensive.
Running as a JAR
Alternatively, you can build and run the application as a JAR file:
# Navigate to your project directory
cd path/to/deepseekchat
# Build the project with Maven
./mvnw clean package
# Run the JAR file
java -jar target/deepseekchat-0.0.1-SNAPSHOT.jar
Understanding the Application Flow
Let’s understand how our application works:
When the application starts, Spring Boot initializes the application context and scans for components.
Spring Shell is configured and starts its interactive command-line interface.
The ChatCommand class is detected and registered as a command provider.
When a user types a chat command, Spring Shell parses the input and invokes the corresponding method in our ChatCommand class.
The message is sent to the DeepSeek model through Ollama using the Spring AI client.
The response from the model is returned to the user in the shell.
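To make the parsing step concrete, here is a deliberately simplified sketch of what an interactive shell does with a line like chat "What is the capital of France?": split off the first token as the command name and treat the (optionally quoted) remainder as the argument. Spring Shell's real parser handles options, completion, and quoting rules far beyond this.

```java
// Minimal illustration of shell-style command parsing (not Spring Shell's
// actual implementation): split the line into a command name and an argument,
// and strip surrounding quotes from the argument.
public class MiniShell {

    // Parses a line like: chat "some message"  into {"chat", "some message"}.
    public static String[] parse(String line) {
        String trimmed = line.trim();
        int space = trimmed.indexOf(' ');
        if (space < 0) {
            return new String[] { trimmed, "" }; // bare command, no argument
        }
        String command = trimmed.substring(0, space);
        String arg = trimmed.substring(space + 1).trim();
        if (arg.length() >= 2 && arg.startsWith("\"") && arg.endsWith("\"")) {
            arg = arg.substring(1, arg.length() - 1); // strip surrounding quotes
        }
        return new String[] { command, arg };
    }

    public static void main(String[] args) {
        String[] parsed = parse("chat \"What is the capital of France?\"");
        System.out.println(parsed[0] + " -> " + parsed[1]);
    }
}
```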
Extending the Application
Now that we have a basic chat application, here are some ways you could extend it:
Add command history persistence
Implement conversation memory to maintain context
Add support for system prompts to guide the model’s behavior
Create commands for different types of interactions (summarize, translate, etc.)
Add support for switching between different models
Implement streaming responses for faster feedback
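As a taste of the conversation-memory idea, here is a small, framework-free sketch of the underlying concept (Spring AI provides its own memory support; this is only an illustration): keep recent exchanges in a bounded list and prepend them to each new prompt so the model sees the context.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Framework-free sketch of conversation memory: remember the last few
// user/assistant exchanges and fold them into each new prompt.
public class ConversationMemory {

    private final Deque<String> history = new ArrayDeque<>();
    private final int maxEntries;

    public ConversationMemory(int maxEntries) {
        this.maxEntries = maxEntries;
    }

    // Records one exchange, evicting the oldest when the buffer is full.
    public void record(String userMessage, String assistantReply) {
        history.addLast("User: " + userMessage + "\nAssistant: " + assistantReply);
        while (history.size() > maxEntries) {
            history.removeFirst();
        }
    }

    // Builds a prompt that includes the remembered context.
    public String buildPrompt(String newMessage) {
        StringBuilder prompt = new StringBuilder();
        for (String exchange : history) {
            prompt.append(exchange).append('\n');
        }
        prompt.append("User: ").append(newMessage);
        return prompt.toString();
    }

    public static void main(String[] args) {
        ConversationMemory memory = new ConversationMemory(2);
        memory.record("Hi", "Hello! How can I help?");
        System.out.println(memory.buildPrompt("What is Spring Shell?"));
    }
}
```

In the shell application, the chat method could call buildPrompt before invoking the chat client and record after receiving the reply, so each request carries recent context without letting the prompt grow without bound.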
Conclusion
In this blog post, we’ve built a simple yet powerful shell application that allows you to chat with the DeepSeek-r1:1.5b language model locally using Spring Boot, Spring Shell, and Spring AI. This approach gives you the privacy and control of running the model on your own hardware while still leveraging the simplicity and power of the Spring ecosystem.
The combination of Ollama for local model hosting and Spring’s comprehensive framework makes it easy to build sophisticated AI applications without depending on cloud services or complex infrastructure. As language models continue to evolve, having the skills to integrate them into your local applications will become increasingly valuable.
Happy coding and chatting!