Sample AI Implementation in Java

Below is a simple example of an AI-related concept implemented in Java: a basic perceptron, the building block of neural networks. This code demonstrates how an AI model can learn to classify data points (here, a binary classification problem) using supervised learning. I’ll keep it beginner-friendly and explain each part.


Sample Code: Single Perceptron in Java
Problem
We’ll train a perceptron to act as a logical AND gate. Given two binary inputs (0 or 1), it outputs 1 only if both inputs are 1:
  • 0, 0 → 0
  • 0, 1 → 0
  • 1, 0 → 0
  • 1, 1 → 1
Code
public class Perceptron {
    // Weights and bias for the perceptron
    private double[] weights;
    private double bias;
    private double learningRate;

    // Constructor
    public Perceptron(int inputSize, double learningRate) {
        this.weights = new double[inputSize];
        this.learningRate = learningRate;
        this.bias = 0.0;

        // Initialize weights randomly between -1 and 1
        for (int i = 0; i < inputSize; i++) {
            weights[i] = Math.random() * 2 - 1; // Random value between -1 and 1
        }
    }

    // Activation function: simple step function (returns 1 if sum > 0, else 0)
    private int activate(double sum) {
        return sum > 0 ? 1 : 0;
    }

    // Forward pass: compute the output for given inputs
    public int predict(int[] inputs) {
        double sum = bias;
        for (int i = 0; i < inputs.length; i++) {
            sum += inputs[i] * weights[i];
        }
        return activate(sum);
    }

    // Train the perceptron with labeled data
    public void train(int[][] trainingData, int[] labels, int epochs) {
        for (int epoch = 0; epoch < epochs; epoch++) {
            for (int i = 0; i < trainingData.length; i++) {
                int[] inputs = trainingData[i];
                int expected = labels[i];
                int prediction = predict(inputs);

                // Calculate error
                int error = expected - prediction;

                // Update weights and bias using the perceptron learning rule
                for (int j = 0; j < weights.length; j++) {
                    weights[j] += learningRate * error * inputs[j];
                }
                bias += learningRate * error;
            }
        }
    }

    public static void main(String[] args) {
        // Training data for AND gate
        int[][] trainingData = {
            {0, 0},
            {0, 1},
            {1, 0},
            {1, 1}
        };
        int[] labels = {0, 0, 0, 1}; // Expected outputs for AND

        // Create perceptron with 2 inputs and a learning rate of 0.1
        Perceptron perceptron = new Perceptron(2, 0.1);

        // Train for 10 epochs
        perceptron.train(trainingData, labels, 10);

        // Test the perceptron
        System.out.println("Testing the trained perceptron:");
        for (int[] inputs : trainingData) {
            int output = perceptron.predict(inputs);
            System.out.printf("Input: [%d, %d] -> Output: %d%n", 
                inputs[0], inputs[1], output);
        }
    }
}

Explanation
  1. Perceptron Class
    • A perceptron is a simple neural unit with weights, a bias, and an activation function.
    • weights: Multiplied by inputs to determine their importance.
    • bias: Shifts the decision boundary.
    • learningRate: Controls how fast the model learns (too high = unstable, too low = slow).
  2. Key Methods
    • predict: Computes the weighted sum of inputs and bias, then applies the activation function (step function here) to output 0 or 1.
    • train: Adjusts weights and bias based on errors (difference between expected and predicted outputs). This is the "learning" part.
    • activate: A simple threshold—if the sum is positive, output 1; otherwise, 0.
  3. Training Data
    • trainingData: 4 input pairs representing all combinations of two binary inputs.
    • labels: The correct outputs for an AND gate.
  4. Main Method
    • Creates a perceptron, trains it on the AND gate data for 10 epochs (full passes over the training set), then tests it by predicting the output for each input pair.
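To make the learning rule in train concrete, here is one update step worked by hand. The starting weights (0.2 and -0.5), bias (0), and learning rate (0.1) are hypothetical values chosen for illustration; the class above initializes the weights randomly:

```java
public class UpdateStepDemo {
    public static void main(String[] args) {
        // Hypothetical starting values, chosen so the first prediction is wrong
        double[] weights = {0.2, -0.5};
        double bias = 0.0;
        double learningRate = 0.1;
        int[] inputs = {1, 1};
        int expected = 1; // AND(1, 1) = 1

        // Forward pass: weighted sum plus bias
        double sum = bias + inputs[0] * weights[0] + inputs[1] * weights[1]; // -0.3
        int prediction = sum > 0 ? 1 : 0; // step function -> 0, which is wrong

        // Perceptron learning rule: w_j += learningRate * error * x_j
        int error = expected - prediction; // 1 - 0 = +1
        for (int j = 0; j < weights.length; j++) {
            weights[j] += learningRate * error * inputs[j];
        }
        bias += learningRate * error;

        System.out.printf("weights: [%.1f, %.1f], bias: %.1f%n",
            weights[0], weights[1], bias); // weights: [0.3, -0.4], bias: 0.1
    }
}
```

The weighted sum is -0.3, so the perceptron wrongly outputs 0; the error of +1 nudges each active weight and the bias up by 0.1, moving the decision boundary toward the correct answer.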
Sample Output
Testing the trained perceptron:
Input: [0, 0] -> Output: 0
Input: [0, 1] -> Output: 0
Input: [1, 0] -> Output: 0
Input: [1, 1] -> Output: 1
Because AND is linearly separable, the perceptron learning rule is guaranteed to converge, and the trained model reproduces the AND gate exactly.

How This Relates to AI
    • This perceptron is a tiny piece of AI. Real neural networks stack many perceptrons into layers (as in convolutional neural networks) to solve complex problems.
  • The learning process (adjusting weights based on errors) is the core of supervised learning, a key AI technique.
Extending This
    • Add more inputs or layers to handle complex problems (e.g., an XOR gate needs a multi-layer perceptron because XOR is not linearly separable).
  • Use libraries like DeepLearning4j for real-world Java AI projects (this example is from scratch for learning).
  • Swap the step function for something like sigmoid for smoother outputs.
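For example, retraining as an OR gate only requires changing the labels; the learning rule stays the same. The sketch below inlines that rule so it runs on its own, and fixes the starting weights (hypothetical values -0.5 and 0.5) instead of using random ones so the run is reproducible:

```java
public class OrGateDemo {
    public static void main(String[] args) {
        int[][] data = { {0, 0}, {0, 1}, {1, 0}, {1, 1} };
        int[] labels = {0, 1, 1, 1}; // OR truth table

        // Fixed (hypothetical) initial weights for a reproducible run
        double[] w = {-0.5, 0.5};
        double bias = 0.0, lr = 0.1;

        // Same perceptron learning rule as above, for 20 epochs
        for (int epoch = 0; epoch < 20; epoch++) {
            for (int i = 0; i < data.length; i++) {
                double sum = bias + data[i][0] * w[0] + data[i][1] * w[1];
                int error = labels[i] - (sum > 0 ? 1 : 0);
                w[0] += lr * error * data[i][0];
                w[1] += lr * error * data[i][1];
                bias += lr * error;
            }
        }

        // Test: the trained weights reproduce the OR truth table
        for (int i = 0; i < data.length; i++) {
            double sum = bias + data[i][0] * w[0] + data[i][1] * w[1];
            System.out.printf("OR(%d, %d) = %d%n",
                data[i][0], data[i][1], sum > 0 ? 1 : 0);
        }
        // Prints:
        // OR(0, 0) = 0
        // OR(0, 1) = 1
        // OR(1, 0) = 1
        // OR(1, 1) = 1
    }
}
```

OR, like AND, is linearly separable, so convergence is guaranteed; with these starting values the weights settle within the first ten epochs.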
Want to modify this (e.g., for a different logic gate like OR) or see a more advanced example?
