
Yup, if you have ever used MATLAB to simulate the backpropagation neural network algorithm: Octave has a package that serves the same purpose, namely the nnet package https://octave.sourceforge.io/nnet/overview.html. As usual, to install it online, type the following command

pkg install -forge nnet

Make sure you have an internet connection first. Once the installation has succeeded, check it with

pkg list

The output shows that the nnet package has been installed properly

nnet | 0.1.13 | C:\Octave\OCTAVE~1.0\mingw64\share\octave\packages\nnet-0.1.13

Now let's load the package

pkg load nnet

To verify that it has loaded correctly, let's look at the documentation of the newff function (new feed-forward network) with help newff; the output will appear as follows

>> help newff
'newff' is a function from the file C:\Octave\OCTAVE~1.0\mingw64\share\octave\packages\nnet-0.1.13\newff.m

 -- Function File: NET = newff (PR,SS,TRF,BTF,BLF,PF)
     'newff' create a feed-forward backpropagation network

          Pr - R x 2 matrix of min and max values for R input elements
          Ss - 1 x Ni row vector with size of ith layer, for N layers
          trf - 1 x Ni list with transfer function of ith layer,
                default = "tansig"
          btf - Batch network training function,
                default = "trainlm"
          blf - Batch weight/bias learning function,
                default = "learngdm"
          pf  - Performance function,
                default = "mse".

          EXAMPLE 1
          Pr = [0.1 0.8; 0.1 0.75; 0.01 0.8];
               it's a 3 x 2 matrix, this means 3 input neurons

          net = newff(Pr, [4 1], {"tansig","purelin"}, "trainlm", "learngdm", "mse");

Additional help for built-in functions and operators is
available in the online version of the manual.  Use the command
'doc <topic>' to search the manual index.

Help and information about Octave is also available on the WWW
at https://www.octave.org and via the help@octave.org
mailing list.

Congratulations! You have installed and loaded the package successfully. Two things you should know:

  1. The nnet package only implements the feed-forward backpropagation neural network.
  2. The training algorithm is Levenberg-Marquardt.

Workflow

In general, the steps you need to follow to use the nnet package are

Data Pre-processing

This means standardizing the data so that its mean becomes 0 and its standard deviation becomes 1, if the dataset does not already fall in the range 0 to 1. You can read about normalization techniques at this link https://softscients.com/2020/03/29/buku-pemrograman-matlab-langkah-langkah-normalisasi-data/.
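As a sketch of the standardization step described above (the matrix X and the variable names here are illustrative, not part of the nnet package):

```octave
% Z-score standardization: mean 0, standard deviation 1 per column.
% X is a hypothetical data matrix with one record per row.
X = [1 8 5; 2 7 6; 3 9 4];
mu = mean(X);                 % column means
sigma = std(X);               % column standard deviations
X_std = (X - mu) ./ sigma;    % standardized data (Octave broadcasts row vectors)
disp(mean(X_std))             % approximately all zeros
disp(std(X_std))              % approximately all ones
```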

However, if you do not want your data rescaled to the range 0 to 1, or you prefer to use the purelin transfer function, then the network must be given the min/max information of the data. Consider the dataset below, which has 7 records with 2 input parameters and 1 target.

In general, the dataset we obtain will be arranged as above; newff, however, expects the data arranged column-wise, so it must be transposed. You can then read off its min/max values. Use the io package to read Excel files.
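As a rough sketch of that transpose and min/max step (the input values are illustrative; min_max comes from the nnet package):

```octave
pkg load nnet
% Records usually come one per row; newff wants one column per sample.
P = [8 5; 7 6; 9 4]';     % transpose: now 2 input rows x 3 sample columns
P_min_max = min_max(P)    % 2 x 2 matrix: [min max] for each input row
```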

Network Architecture

For the architecture, we can use a single hidden layer, as follows.

Here is an example of the code:

clc;
warning off;

data = xlsread('logika or.xlsx')
P = data(:,2:3);
T = data(:,end);

% don't forget to transpose
P = transpose(P)
T = transpose(T)

P_min_max = min_max(P)

model = newff(P_min_max,[5 1],{'logsig' 'purelin'},'trainlm','learngdm','mse');
model.trainParam.epochs = 500;
model.trainParam.goal = 0.01;
[net] = train(model,P,T); %training session
prediksi = round(sim(net,P)) %prediction session

If you inspect the net variable, it has the following structure

>> net
net =

  scalar structure containing the fields:

    networkType = newff
    numInputs =  1
    numLayers =  2
    numInputDelays = 0
    numLayerDelays = 0
    biasConnect =

       0
       0

    inputConnect =

       0
       0

    layerConnect =

       0   0
       0   0

    outputConnect =

       0   0

    targetConnect =

       0   0

    numOutputs =  1
    numTargets =  1
    inputs =
    {
      [1,1] =

        scalar structure containing the fields:

          range =

             2   9
             1   8

          size =  2
          userdata = Put your custom informations here!

    }

    layers =
    {
      [1,1] =

        scalar structure containing the fields:

          dimension = 0
          netInputFcn =
          size =  5
          transferFcn = logsig
          userdata = Put your custom informations here!

      [2,1] =

        scalar structure containing the fields:

          dimension = 0
          netInputFcn =
          size =  1
          transferFcn = purelin
          userdata = Put your custom informations here!

    }

    biases =
    {
      [1,1] =

        scalar structure containing the fields:

          learn =  1
          learnFcn =
          learnParam = undefined...
          size =  5
          userdata = Put your custom informations here!

      [2,1] =

        scalar structure containing the fields:

          learn =  1
          learnFcn =
          learnParam = undefined...
          size =  1
          userdata = Put your custom informations here!

    }

    inputWeights = {}(0x0)
    layerWeights = {}(0x0)
    outputs =
    {
      [1,1] = [](0x0)
      [1,2] =

        scalar structure containing the fields:

          size =  1
          userdata = Put your custom informations here!

    }

    targets =
    {
      [1,1] = [](0x0)
      [1,2] =

        scalar structure containing the fields:

          size =  1
          userdata = Put your custom informations here!

    }

    performFcn = mse
    performParam = [](0x0)
    trainFcn = trainlm
    trainParam =

      scalar structure containing the fields:

        epochs =  500
        goal =  0.010000
        max_fail =  5
        mem_reduc =  1
        min_grad =  0.00000000010000
        mu =  0.0010000
        mu_dec =  0.10000
        mu_inc =  10
        mu_max =  10000000000
        show =  50
        time =  Inf

    IW =
    {
      [1,1] =

        -0.55684  -0.70921
         0.75063   0.84642
         1.06797   0.50958
        -0.35376   0.78137
        -0.48356   0.11495

      [2,1] = [](0x0)
    }

    LW =
    {
      [1,1] = [](0x0)
      [2,1] =

         2.09062   0.36432  -0.22793   1.86363  -3.06470

    }

    b =
    {
      [1,1] =

         1.51604
         0.45098
        -0.24536
        -0.17562
         2.81306

      [2,1] =  0.73663
    }

    userdata =

      scalar structure containing the fields:

        note = Put your custom network information here.


>>

Here is the output of running the code above

data =

   1   8   5   1
   2   7   6   1
   3   9   4   1
   4   6   3   0
   5   3   7   0
   6   2   1   0
   7   4   8   0

P =

   8   7   9   6   3   2   4
   5   6   4   3   7   1   8

T =

   1   1   1   0   0   0   0

P_min_max =

   2   9
   1   8

TRAINLM, Epoch 0/500, MSE 0.451969/0.01, Gradient 5.68124/1e-10
TRAINLM, Epoch 46/500, MSE 0.00072238/0.01, Gradient 0.755967/1e-10
TRAINLM, Performance goal met.

prediksi =

   1   1   1   0   0   0   0

>>
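After training, you can also run the network on samples it has not seen. A minimal sketch, assuming the net variable produced by the code above and hypothetical new inputs that lie within the trained min/max range:

```octave
% Each column is one new sample (2 input values per sample),
% kept inside the [2 9; 1 8] range the network was trained on.
P_new = [8 3;
         6 2];
prediksi_baru = round(sim(net, P_new))
```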

Bugs

If you run into an error, it is because the function finite() has been deprecated in favor of isfinite(), but it is still called in the code of logsig.m and tansig.m.
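If you hit that error, you can locate the affected files from inside Octave and then edit them by hand. A sketch:

```octave
pkg load nnet
% Print the full paths of the files that still call the deprecated function:
which logsig
which tansig
% Open each file in an editor and replace every call to
% finite(...) with isfinite(...), then re-run your script.
```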
