
AI



Install packages

sudo apt install lshw curl
 lshw -C display
  *-display                 
       description: VGA compatible controller
       product: GM107GL [Quadro K2200]
       vendor: NVIDIA Corporation
       physical id: 0
       bus info: pci@0000:02:00.0
       logical name: /dev/fb0
       version: a2
       width: 64 bits
       clock: 33MHz
       capabilities: pm msi pciexpress vga_controller bus_master cap_list rom fb
       configuration: depth=32 driver=nouveau latency=0 resolution=1920,1080
       resources: irq:38 memory:f2000000-f2ffffff memory:e0000000-efffffff memory:f0000000-f1ffffff ioport:2000(size=128) memory:c0000-dffff
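
As an optional cross-check, lspci (from the pciutils package, an extra dependency not listed above) shows which kernel driver is currently bound to the card; before installing the proprietary driver you will normally see nouveau here, matching the lshw output above:

sudo apt install pciutils
lspci -nnk | grep -A3 -i vga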

Requirements

  • Ubuntu 24.04 or later installed
  • Your GPU (NVIDIA)
  • Open WebUI → ChatGPT-style web interface with persistent history, users, attachments and memory/Knowledge Base.
  • (Optional) ChromaDB → vector memory so it can "learn" from your chats and documents (RAG).

Install the NVIDIA driver

CUDA Installation Guide for Linux

Add 'contrib non-free' to the end of each line of the file '/etc/apt/sources.list':


    deb http://deb.debian.org/debian/ bookworm main non-free-firmware contrib non-free
    deb http://security.debian.org/debian-security bookworm-security main non-free-firmware contrib non-free
    deb http://deb.debian.org/debian/ bookworm-updates main non-free-firmware contrib non-free
    deb http://deb.debian.org/debian/ bookworm-backports main non-free-firmware contrib non-free
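    If you prefer to patch the file non-interactively, a sed one-liner along these lines appends the components to every 'deb' entry (a sketch that assumes the stock bookworm sources.list and that 'contrib non-free' is not already present):

    sudo sed -i '/^deb /s/$/ contrib non-free/' /etc/apt/sources.list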
    Install the packages
    LINUX_HEADERS=$(uname -r)
    sudo apt update
    sudo apt -y install nvidia-driver firmware-misc-nonfree linux-headers-$LINUX_HEADERS dkms
    sudo reboot
    Check the installation; the following command shows the status of the graphics card:
    nvidia-smi
    
    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 525.105.17   Driver Version: 525.105.17   CUDA Version: 12.0     |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |                               |                      |               MIG M. |
    |===============================+======================+======================|
    |   0  NVIDIA GeForce ...  On   | 00000000:05:00.0 Off |                  N/A |
    |  0%   54C    P0    27W / 120W |      1MiB /  6144MiB |      0%      Default |
    |                               |                      |                  N/A |
    +-------------------------------+----------------------+----------------------+
    
    +-----------------------------------------------------------------------------+
    | Processes:                                                                  |
    |  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
    |        ID   ID                                                   Usage      |
    |=============================================================================|
    |  No running processes found                                                 |
    +-----------------------------------------------------------------------------+
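
    If nvidia-smi fails or the card does not show up, an optional extra check (the dkms package was installed in the step above) is to confirm that the kernel module was built and is loaded:

    dkms status
    lsmod | grep nvidia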

    Install the NVIDIA Container Toolkit

    Debian 12 Bookworm : NVIDIA Container Toolkit : Install : Server World (server-world.info)

    Installing the NVIDIA Container Toolkit — NVIDIA Container Toolkit 1.16.2 documentation
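
    Condensed from the linked NVIDIA guide, the deb-based install looks roughly like this (a sketch; verify the repository URL and steps against the current documentation before running them):

    curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
      sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg
    curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
      sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
      sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
    sudo apt update
    sudo apt install -y nvidia-container-toolkit
    # Register the NVIDIA runtime with Docker and restart the daemon
    sudo nvidia-ctk runtime configure --runtime=docker
    sudo systemctl restart docker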

    Install OLLAMA

    ollama/ollama - Docker Image | Docker Hub

    curl -fsSL https://ollama.com/install.sh | sh
    
    # Start the service (if it did not start on its own):
    sudo systemctl enable --now ollama
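
    Verify that the native service is answering; the API root simply replies "Ollama is running":

    curl http://localhost:11434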


    Run the container using CPU only

     docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama docker.io/ollama/ollama

    Run the container using the GPU

    docker run -d --gpus=all -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
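
    A quick way to confirm the container actually sees the card (assuming the container name ollama from the command above) is to run nvidia-smi inside it:

    docker exec -it ollama nvidia-smi
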
    Run a model

    Now you can run a model:

    docker exec -it ollama ollama run llama3
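
    The model can also be queried through the REST API that Ollama exposes on port 11434 (endpoint and fields as documented by the Ollama project; adjust the prompt as needed):

    curl http://localhost:11434/api/generate -d '{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}'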

    Open WebUI

    🏡 Home | Open WebUI

    Installing Open WebUI with Bundled Ollama Support

    This installation method uses a single container image that bundles Open WebUI with Ollama, allowing for a streamlined setup via a single command. Choose the appropriate command based on your hardware setup:

    • With GPU Support: Utilize GPU resources by running the following command:

      sudo podman run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
    • For CPU Only: If you're not using a GPU, use this command instead:

      podman run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama

    Both commands facilitate a built-in, hassle-free installation of both Open WebUI and Ollama, ensuring that you can get everything up and running swiftly.

    After installation, you can access Open WebUI at http://localhost:3000. Enjoy! 😄
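
    If the page is not reachable right away, the container may still be initializing; you can follow its startup logs using the container name from the commands above:

    podman logs -f open-webui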