Lorenzo Siena's
portfolio

Work in progress 👨🏻‍💻

web

HRMS — Django with Python

HRMS Screenshot

A Human Resources management system developed in Django

A web application for the efficient management of personnel in small and medium-sized enterprises.
The system centralizes and automates key HR processes, including employee management, attendance monitoring, leave management, and payroll processing.
It includes login, registration, and authentication, and also offers advanced features such as performance-report generation and a notification system for corporate communications.

Technologies used:

  • Backend: Django
  • Frontend: HTML, CSS, JavaScript, and Bootstrap
  • Database: SQLite
  • Authentication: Django Authentication

Main Features:

  • Complete employee management.
  • Request and approval system for attendance and leave.
  • Payroll management with reserved access.
  • Generation of detailed reports and statistics.
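
The request/approval flow for leave can be illustrated with a plain-Python sketch; in the real project this logic lives in Django models and views, and all names here are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"

@dataclass
class LeaveRequest:
    """A leave request moves PENDING -> APPROVED or PENDING -> REJECTED, once."""
    employee: str
    days: int
    status: Status = Status.PENDING

    def approve(self) -> None:
        if self.status is not Status.PENDING:
            raise ValueError("only pending requests can be approved")
        self.status = Status.APPROVED

    def reject(self) -> None:
        if self.status is not Status.PENDING:
            raise ValueError("only pending requests can be rejected")
        self.status = Status.REJECTED

# usage: a manager approves a 3-day request
req = LeaveRequest("Alice", 3)
req.approve()
print(req.status.value)  # approved
```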
Go to project
AI
RAG

A simple Wikipedia RAG Application

Wikipedia RAG Application

Web application with Streamlit and LlamaIndex

A simple web application built with Streamlit that demonstrates how RAG (Retrieval-Augmented Generation) works. The app uses LlamaIndex to load and query content from Wikipedia pages on Artificial Intelligence and Machine Learning, providing answers grounded in the retrieved context.

Technologies used:

  • Streamlit: For the interactive user interface.
  • LlamaIndex: Framework for building LLM applications with external data (Wikipedia).
  • OpenAI: For embedding models and language model (LLM).
  • WikipediaReader: LlamaIndex data reader for Wikipedia content.
  • python-dotenv: Environment variable management.
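
The retrieve-then-generate loop at the heart of the app can be sketched in pure Python. This toy uses word overlap in place of the OpenAI embeddings the real app queries through LlamaIndex, so it runs offline; all function names are illustrative:

```python
# Toy RAG flow: retrieve the most relevant chunk, then build a prompt
# that grounds the LLM's answer in that context.

def score(query: str, chunk: str) -> int:
    """Stand-in for vector similarity: count shared words."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    """Return the k best-scoring chunks for the query."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Compose the prompt the LLM would receive."""
    return ("Answer using only this context:\n"
            + "\n".join(context)
            + f"\nQuestion: {query}")

chunks = [
    "Machine learning is a field of artificial intelligence.",
    "The Eiffel Tower is in Paris.",
]
ctx = retrieve("what is machine learning", chunks)
print(build_prompt("what is machine learning", ctx))
```

In the real app, `retrieve` is handled by a LlamaIndex query engine over embeddings of the Wikipedia pages, and the prompt is sent to an OpenAI model.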
Go to project
Android

CyberShop — Java for Android

CyberShop Screenshot

Cyberpunk-themed business management app

CyberShop is a fictional business-management app set in the year 2079. It sells cybernetic prostheses that can be previewed in the app through augmented reality or as 3D models in STL, a CAD format; once purchased, they can be printed at home on a 3D printer capable of printing biocompatible implants and electronic circuits.

The project and demo were created for the Mobile Programming course.

Technologies and Services Used:

Images and texts were generated with Scribble Diffusion and ChatGPT. The demo is functional and responsive.

Repository and presentation available in PDF.

Go to the pdf Go to project
Hack

Hack and Reverse Engineering of a Launchpad MK1


LED Control via Python and MIDI

The project stems from curiosity about the limits of the USB and MIDI protocols on a Novation Launchpad MK1.
Initially, by fuzzing with PyUSB and random packets, I managed to turn on the device's LEDs, even though the device was not officially supported on Linux.

Subsequently, I delved into its internal workings: the Launchpad is a 9x9 grid of illuminable keys (3 levels of red and 3 of green, combinable into yellow), which responds to MIDI commands. I developed three Python scripts to describe the matrix, control its LEDs, and send systematic commands, using a wrapper that leverages amidi as a subprocess.

In addition to controlling the lights, the script receives and prints the MIDI events of the pressed keys, transforming the Launchpad into an interactive interface.

In this way I was able to turn the LEDs on, off, and color them as I pleased, without official drivers, even creating visual effects with random packets and instant shutdown sequences.
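
The message-building half of those scripts can be sketched as follows. The key/velocity encoding (note-on, key = 16·row + col, velocity packing the red and green levels) follows the Launchpad programmer's reference for the note-on grid; the amidi port name is machine-specific and is an assumption here:

```python
import subprocess

def led_message(row: int, col: int, red: int, green: int) -> str:
    """Build a note-on hex string for amidi.
    Grid pads: key = 16*row + col; velocity packs green in the high
    nibble, red in the low bits, plus the 'normal LED' flags (0x0C).
    (The top row of round buttons uses CC messages instead.)"""
    assert 0 <= row < 8 and 0 <= col < 9
    assert 0 <= red <= 3 and 0 <= green <= 3
    key = 16 * row + col
    velocity = (green << 4) | red | 0x0C
    return f"90 {key:02X} {velocity:02X}"

def send(message: str, port: str = "hw:1,0,0") -> None:
    """Ship the raw bytes to the device via amidi (port found with `amidi -l`)."""
    subprocess.run(["amidi", "-p", port, "-S", message], check=True)

# full-brightness yellow (red 3 + green 3) on the top-left grid pad:
print(led_message(0, 0, 3, 3))  # 90 00 3F
```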

Go to video Go to project on GitHub Go to LinkedIn update
web

Server-ino — Bash script

Server-ino

Mini graphical interface for LAMPP

A tiny graphical interface for LAMPP, written in Bash with tput (a terminal-control utility used here as a mini graphics library). Its purpose is to start a local server without touching the command line.

Go to project
AI Agent
Esp32

Johnny The CyberCar Assistant (Thesis)

Johnny The CyberCar Assistant Logo

A locally hosted, RAG-enabled AI agent for automotive use, with voice commands.

Johnny The CyberCar Assistant is a proof of concept (PoC) of a fully local RAG-enabled AI agent grafted onto an Opel Corsa B from 1997, the result of my bachelor's thesis in computer engineering at the University of Catania.

Project Architecture

The project consists of 3 systems:

Description and Functionality

Inspired by KITT from Supercar (aka Knight Rider), Johnny makes the car smart: an ESP32 connected to the CAN bus, paired with a 4G modem, enables a voice interface, long-term memory, and intelligent control (via the OBD-II port) at the edge.
The rest of the AI system runs remotely on a server with a consumer GPU (GTX 1080 Ti).

The assistant is invoked by the keyword "Hey Johnny!", after which it is possible to use voice commands to act locally on the car's devices (like raising the windows) or speak voice-to-voice remotely with the Chatbot, which responds with the context of the car's data (GPS position, speed, temperature, etc.), without ever taking your hands off the steering wheel.
The Qdrant vector database manages the assistant's short/long-term memory, enabling a true RAG (Retrieval-Augmented Generation) experience.

All services are orchestrated with Docker Compose.

Audio Pipeline

🎤 User voice
↓
❓ If the command is local → executes a local action ⚙️
Otherwise:
↓
🐱 contacts Johnny, the remote chatbot
↓
🌐 Cloudflared (authentication)
↓
📝 WhisperAI (Voice2Text on GPU)
↓
🧠 LLaMA3 (Ollama, inference on GPU)
↓
🔊 Text2Speech (vocal response)
↓
❓ Loop: until "stop" is detected, control returns to Johnny 🐱
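
The local-vs-remote branch at the top of that pipeline can be illustrated with a toy dispatcher; the command phrases and action names below are hypothetical, not the project's actual command set:

```python
# Toy sketch of the dispatch step: known local commands are executed on the
# car's hardware (ESP32 side), everything else is forwarded to the remote
# chatbot pipeline for transcription and inference.

LOCAL_COMMANDS = {
    "raise the windows": "window_up",
    "lower the windows": "window_down",
}

def dispatch(transcript: str) -> tuple[str, str]:
    """Return ('local', action) for known commands, else ('remote', text)."""
    text = transcript.lower().strip()
    for phrase, action in LOCAL_COMMANDS.items():
        if phrase in text:
            return ("local", action)   # acted on directly by the ESP32
    return ("remote", text)            # sent through Cloudflared to the chatbot

print(dispatch("Hey Johnny, raise the windows"))  # ('local', 'window_up')
print(dispatch("Hey Johnny, how far to Catania?"))
```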

Examples of use

N.B. The project is fully open source. It was built in about two and a half months and is designed for hardware reuse and edge computing in the automotive sector.
The architecture, images, videos, and the full thesis in PDF (Italian only) are available in the repository.

Components developed for the project

Technologies used

  • Cheshire Cat AI Framework — modular platform for AI agents, based on LangChain for reasoning management and FastAPI for API exposure.
  • Ollama — local AI backend for executing LLM models, integrated with the framework.
  • Large Language Models (LLM) — executed via Ollama on an NVIDIA GTX 1080 Ti GPU.
  • Qdrant — vector database used for Retrieval-Augmented Generation (RAG).
  • Whisper AI — local voice recognition executed on GPU.
  • Docker & Docker Compose — containerization and service orchestration.
  • Cloudflared — secure tunnel to expose services in edge computing.
  • ESP32 + Arduino — microcontrollers used as interface hardware.
  • CAN bus and OBD-II — protocols used for communication with vehicle systems.


web

Retro Museum

Retro Museum

Fake retro gaming-themed e-commerce in Laravel (PHP)

RetroMuseum is a university web-development project: a mock e-commerce site for a fictional company from Catania where users can buy used games and consoles, with a visual style heavily inspired by classic 8-bit graphics.
The backend was initially implemented in plain PHP and later migrated to Laravel.

It uses a pre-populated MySQL database, calls the Spotify API (OAuth 2.0) to fetch the latest (fake) podcast, and uses MongoDB as the cart database (added purely for extra project credit).

It includes a login system with client- and server-side validation, a search bar, and content dynamically generated via JS from the rawg.io API.

The entire web app is mobile-responsive.

Go to project

Cloud and On-Premise Solution Projects


cloud gaming

Cloud Gaming Bare Metal

Cloud Gaming Bare Metal

Debian + SNES Emulator + Sunshine

A simple local cloud-gaming setup: an old laptop runs Debian, an SNES emulator, and the Sunshine server. Retro games can then be streamed over the network and played remotely with the Moonlight client, without containers or virtualization.

Technologies used:

  • Sunshine
  • Moonlight
  • Cloudflared Tunnel
  • Bsnes
  • Raspberry Pi
  • an old PC as a server

N.B. Minimal and zero-cost configuration, designed for hardware recycling.

Go to the linkedin post
local AI

Local Chatbot with Docker

Local Chatbot with Docker

Ollama + OpenWebUI + Cloudflare Tunnel

A private local chatbot project, running on an old laptop with Docker. The model (Gemma 2 or Llama 3.2) runs in an Ollama container, with a web interface via OpenWebUI accessible on port 3000.

It's immediately available on the LAN, while a Raspberry Pi running Cloudflare Tunnel securely exposes the service remotely as well, via a custom .xyz domain and authentication.

Technologies used:

  • Ollama: Local-hosted AI backend.
  • Llama 3.2 / Gemma 2: Open-source models.
  • OpenWebUI: Used as the frontend.
  • Docker & NVIDIA Container Toolkit.
  • Raspberry Pi: Used as a tunnel and access point.
  • Old PC: With a GPU or multi-core CPU and at least 4GB of RAM.
  • Cloudflare Tunnel: For secure remote access.
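
A minimal docker-compose sketch of such a stack might look like the following; image tags, ports, and volume names are assumptions to adapt, and GPU passthrough additionally requires the NVIDIA Container Toolkit:

```yaml
# Sketch only: verify images/tags against the projects' own docs.
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama      # persists downloaded models
    ports:
      - "11434:11434"             # Ollama API
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    ports:
      - "3000:8080"               # web UI on LAN port 3000
    depends_on:
      - ollama
volumes:
  ollama:
```

The Cloudflare Tunnel then runs on the Raspberry Pi, pointing at the laptop's port 3000, so nothing on the LAN is exposed directly to the internet.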

N.B. Configuration designed to recycle hardware and avoid e-waste.

Go to the Local Chatbot linkedin post