Building a Local-Server Version of ChatGPT, Without Registration or an Internet Connection

May 19, 2024

Building a Local-Server Version of ChatGPT – ChatGPT keeps getting more capable, and its multimodal abilities are already shifting work such as video editing, voice-over, and translation; even programmers will be displaced to some extent. This post, however, is not about ChatGPT's capabilities. Instead, we will run a ChatGPT-style model from the LLaMA family on our own server or computer, so it can be used without an internet connection.

LLaMA (Large Language Model Meta AI) is a large language model developed by Meta (formerly Facebook). It is designed for a range of natural language processing (NLP) tasks, such as generating text.

What is Ollama? Ollama is an application that can run an LLM locally on a CPU or GPU, whereas in real-world deployments LLMs are usually served on GPUs/TPUs across many machines at once. You can visit https://ollama.com/download

How to install Ollama

Installing Ollama is actually very easy; I prefer Ubuntu Linux for this experiment. Make sure you are comfortable with the terminal. The first step is to install it from the terminal:

curl https://ollama.ai/install.sh | sh

After that, start the Ollama server with the following command:

ollama serve
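
To confirm the server is actually up, you can hit its HTTP endpoint, which listens on port 11434 by default. Here is a minimal check in Python, assuming the `requests` package is installed:

```python
import requests

# The Ollama server listens on localhost:11434 by default
resp = requests.get("http://localhost:11434")
print(resp.status_code, resp.text)  # a running server should answer 200 "Ollama is running"
```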

Next we will download a language model. Open another terminal tab and pull the following model (it is fairly large, about 3.8 GB): llama2:7b-chat-q4_0

ollama pull llama2:7b-chat-q4_0
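
You can verify that the model was downloaded with `ollama list`, or by asking the server itself. A minimal sketch in Python, assuming the server from the previous step is still running on the default port:

```python
import requests

# /api/tags returns the models that are available locally
resp = requests.get("http://localhost:11434/api/tags")
for model in resp.json()["models"]:
    print(model["name"])  # e.g. llama2:7b-chat-q4_0
```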

Once the download is finished, let's try it out. By the way, you can write your prompt in Indonesian and the model will handle the translation automatically. Let's prove it with the following command:

ollama run llama2:7b-chat-q4_0 "jelaskan mengenai perusahaan emiten BBRI"

Here is its ChatGPT-style response:

 

You can see the CPU hit 100% while it generates the response.
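
Besides the CLI, the same generation can be requested through Ollama's HTTP API, which is handy if you want to call the local model from your own scripts. A minimal sketch in Python, assuming the server is running on the default port and `requests` is installed:

```python
import requests

payload = {
    "model": "llama2:7b-chat-q4_0",
    "prompt": "jelaskan mengenai perusahaan emiten BBRI",
    "stream": False,  # return one JSON object instead of a token stream
}

# POST the prompt to the local server's generate endpoint
resp = requests.post("http://localhost:11434/api/generate", json=payload)
print(resp.json()["response"])
```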


 

I have also tried the model above for code generation, for example asking it to write a training script for the VGG16 model with PyTorch.

bejo@bejo-hp:~$ ollama run llama2:7b-chat-q4_0 "tolong buatkan training model VGG16 dengan pytorch"

To train a VGG16 model using PyTorch, you need to follow these steps:

1. Installing PyTorch:

Most of the time, PyTorch comes pre-installed with popular deep learning frameworks such as TensorFlow and Keras. However, you can also install it manually using pip:
```
pip install torch
```
2. Importing necessary modules:

You will need to import the following modules:
```python
import torch
import torchvision
import torchvision.transforms as transforms
```
3. Loading the VGG16 model:

You can use the `torchvision.load_model()` function to load the pre-trained VGG16 model. Here's an example:
```python
model = torchvision.load_model('vgg16.pth')
```
This will load the pre-trained VGG16 model and store it in a PyTorch tensor.
4. Creating a dataset class:

You can use the `torch.utils.data.Dataset` class to define a custom dataset class that loads your training data. Here's an example:
```python
class MyDataset(torch.utils.data.Dataset):
    def __init__(self, train_data, train_labels):
        self.train_data = train_data
        self.train_labels = train_labels
    
    def __getitem__(self, index):
        image = self.train_data[index]
        label = self.train_labels[index]
        return image, label
    
    def __len__(self):
        return len(self.train_data)
```
This class takes in the training data and labels as arguments in the constructor, and then defines the `__getitem__` method to fetch the data and labels for a given index. The 
`__len__` method returns the length of the training data.
5. Creating a PyTorch DataLoader:

You can use the `torch.utils.data.DataLoader` class to create a data loader that loads the training data in batches. Here's an example:
```python
train_loader = torch.utils.data.DataLoader(MyDataset(train_data, train_labels), batch_size=32, shuffle=True)
```
This creates a data loader that loads the training data in batches of 32 samples each, and shuffles the order of the samples each time it is used.
6. Defining the loss function and optimizer:

You can use the `nn.CrossEntropyLoss()` function to define the loss function for your model, and the `torch.optim.AdamOptimizer()` class to define an Adam optimizer with learning rate 
0.001. Here's an example:
```python
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamOptimizer(model.parameters(), lr=0.001)
```
7. Training the model:

You can use the `train()` method of the `DataLoader` class to train the model on the training data. Here's an example:
```python
for epoch in range(num_epochs):
    for batch in train_loader:
        images, labels = batch
        optimizer.zero_grad()
        outputs = model(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
```
This trains the model on the training data for a specified number of epochs, using the Adam optimizer with a learning rate of 0.001 to update the model's parameters.
8. Evaluating the model:

You can use the `evaluate()` method of the `DataLoader` class to evaluate the model on the validation data. Here's an example:
```python
val_loader = torch.utils.data.DataLoader(MyDataset(train_data, train_labels), batch_size=32, shuffle=False)
val_loss = 0
for batch in val_loader:
    images, labels = batch
    outputs = model(images)
    loss = criterion(outputs, labels)
    val_loss += loss.item()
val_loss /= len(val_loader)
print('Val loss:', val_loss)
```
This evaluates the model on the validation data and calculates the average loss over the entire validation set.
9. Saving the trained model:

You can use the `torch.save()` function to save the trained model to a file. Here's an example:
```python
torch.save(model.state_dict(), 'vgg16_trained.pth')
```
This saves the trained model to a file named `vgg16_trained.pth`.

These are the basic steps involved in training a VGG16 model using PyTorch. You can modify these steps as needed for your specific use case.

bejo@bejo-hp:~$ 

Quite decent overall.
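
That said, the generated code calls a couple of APIs that do not exist (`torchvision.load_model()` and `torch.optim.AdamOptimizer()`) and never imports `torch.nn`. A minimal corrected sketch, using the real torchvision/PyTorch APIs and hypothetical `train_data`/`train_labels` tensors as stand-ins for a real dataset:

```python
import torch
import torch.nn as nn
import torchvision

# Build VGG16 from torchvision; pass weights=torchvision.models.VGG16_Weights.DEFAULT
# instead of None if you want the pre-trained weights (requires a one-time download)
model = torchvision.models.vgg16(weights=None)

# Hypothetical stand-in data: 64 RGB images of 224x224 and integer class labels
train_data = torch.randn(64, 3, 224, 224)
train_labels = torch.randint(0, 1000, (64,))

dataset = torch.utils.data.TensorDataset(train_data, train_labels)
train_loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # Adam, not AdamOptimizer

num_epochs = 2
model.train()
for epoch in range(num_epochs):
    for images, labels in train_loader:
        optimizer.zero_grad()
        outputs = model(images)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

torch.save(model.state_dict(), "vgg16_trained.pth")
```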


 

In addition

You can also ask Ollama to describe images using a vision model, https://ollama.com/blog/vision-models, namely LLaVA (Large Language-and-Vision Assistant). There are three models you can use (the larger the parameter count, the more capable the model):

  • ollama run llava:7b
  • ollama run llava:13b
  • ollama run llava:34b

For example, to describe the following image,

here are the command and the model's response:

% ollama run llava "tell me what do you see in this picture? ./pic.jpg"
In the image, a man wearing blue and white is holding video game controllers in his hands while smiling. His arms are raised above his head as he plays with the virtual reality gaming devices. The man appears to be enjoying himself and immersed in his gaming experience.
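
The same vision request can also go through the HTTP API by sending the image as base64. A minimal sketch in Python, assuming a local file `./pic.jpg` and that the `llava` model has already been pulled:

```python
import base64
import requests

# Encode the image as base64, which is how /api/generate expects images
with open("./pic.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "model": "llava",
    "prompt": "tell me what do you see in this picture?",
    "images": [image_b64],
    "stream": False,  # return one JSON object instead of a token stream
}

resp = requests.post("http://localhost:11434/api/generate", json=payload)
print(resp.json()["response"])
```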