/**
* \addtogroup machine_learning Machine Learning Algorithms
* @{
* \file
* \brief [Adaptive Linear Neuron
* (ADALINE)](https://en.wikipedia.org/wiki/ADALINE) implementation
*
* \author [Krishna Vedala](https://github.com/kvedala)
*
* <img
* src="https://upload.wikimedia.org/wikipedia/commons/b/be/Adaline_flow_chart.gif"
* width="200px">
* [source](https://commons.wikimedia.org/wiki/File:Adaline_flow_chart.gif)
 * ADALINE is one of the earliest and simplest single-layer artificial neural
 * networks. The algorithm essentially implements the linear function
* \f[ f\left(x_0,x_1,x_2,\ldots\right) =
* \sum_j x_jw_j+\theta
* \f]
* where \f$x_j\f$ are the input features of a sample, \f$w_j\f$ are the
 * coefficients of the linear function and \f$\theta\f$ is a constant bias. If
 * the \f$w_j\f$ are known, then for any given feature vector the output can be
 * computed. Computing the \f$w_j\f$ is a supervised learning problem: given a
 * set of feature vectors and their corresponding outputs, the weights are
 * learned iteratively using the stochastic gradient descent method.
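 *
 * Concretely, each training sample updates every weight with the delta rule
 * implemented in fit() below,
 * \f[ w_j \leftarrow w_j + \eta\left(y - \hat{y}\right)x_j \f]
 * where \f$\eta\f$ is the learning rate and
 * \f$\hat{y} = \sigma\left(\sum_j x_jw_j + \theta\right)\f$ is the model
 * prediction after the signum activation \f$\sigma\f$.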
*/
#include <cassert>
#include <climits>
#include <cmath>
#include <cstdlib>
#include <ctime>
#include <iostream>
#include <numeric>
#include <vector>
#define MAX_ITER 500  ///< Maximum number of iterations to learn
/** \namespace machine_learning
* \brief Machine learning algorithms
*/
namespace machine_learning {
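/**
 * \brief ADALINE model: a single neuron with a linear combiner followed by a
 * signum (threshold) activation, trained with the delta rule.
 *
 * A minimal usage sketch, mirroring the test functions further below (the
 * sample values here are illustrative only):
 * \code
 * adaline ada(2);  // model with 2 input features
 * std::vector<double> X[2] = {{0, 1}, {1, -2}};  // training samples
 * int y[2] = {1, -1};                            // known labels
 * ada.fit(X, y);                    // train until convergence or MAX_ITER
 * int label = ada.predict({2, 3});  // classify a new point as +1 or -1
 * \endcode
 */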
class adaline {
public:
/**
     * Constructor
     * \param[in] num_features number of features present
     * \param[in] eta learning rate (optional, default=0.01)
     * \param[in] accuracy convergence accuracy (optional,
     * default=\f$1\times10^{-5}\f$)
*/
    adaline(int num_features, const double eta = 0.01,
const double accuracy = 1e-5)
: eta(eta), accuracy(accuracy) {
if (eta <= 0) {
std::cerr << "learning rate should be positive and nonzero"
<< std::endl;
std::exit(EXIT_FAILURE);
}
weights = std::vector<double>(
num_features +
1); // additional weight is for the constant bias term
        // initialize all weights equal to 1; a random initialization in the
        // range [-50, 49] is left commented out below
        for (size_t i = 0; i < weights.size(); i++) weights[i] = 1.;
        // weights[i] = (static_cast<double>(std::rand() % 100) - 50);
}
/**
* Operator to print the weights of the model
*/
friend std::ostream &operator<<(std::ostream &out, const adaline &ada) {
out << "<";
        for (size_t i = 0; i < ada.weights.size(); i++) {
out << ada.weights[i];
if (i < ada.weights.size() - 1)
out << ", ";
}
out << ">";
return out;
}
/**
     * Predict the output of the model for a given set of features
     * \param[in] x input feature vector
     * \param[out] out optional pointer to return the neuron output before
     * applying the activation function (`nullptr` to ignore)
     * \returns model prediction output
*/
int predict(const std::vector<double> &x, double *out = nullptr) {
if (!check_size_match(x))
return 0;
double y = weights.back(); // assign bias value
        // accumulate the weighted sum of the features on top of the bias
y = std::inner_product(x.begin(), x.end(), weights.begin(), y);
if (out != nullptr) // if out variable is provided
*out = y;
return activation(y); // quantizer: apply ADALINE threshold function
}
/**
     * Update the weights of the model using supervised learning for one
     * feature vector
     * \param[in] x feature vector
     * \param[in] y known output value
     * \returns correction factor
*/
    double fit(const std::vector<double> &x, int y) {
if (!check_size_match(x))
return 0;
/* output of the model with current weights */
int p = predict(x);
int prediction_error = y - p; // error in estimation
double correction_factor = eta * prediction_error;
/* update each weight, the last weight is the bias term */
        for (size_t i = 0; i < x.size(); i++) {
weights[i] += correction_factor * x[i];
}
weights[x.size()] += correction_factor; // update bias
return correction_factor;
}
/**
     * Update the weights of the model using supervised learning for an
     * array of feature vectors
     * \param[in] X array of feature vectors
     * \param[in] y known output value for each feature vector
*/
template <int N>
void fit(std::vector<double> const (&X)[N], const int *y) {
double avg_pred_error = 1.f;
int iter;
for (iter = 0; (iter < MAX_ITER) && (avg_pred_error > accuracy);
iter++) {
avg_pred_error = 0.f;
// perform fit for each sample
for (int i = 0; i < N; i++) {
double err = fit(X[i], y[i]);
avg_pred_error += std::abs(err);
}
avg_pred_error /= N;
            // report training progress on every iteration
std::cout << "\tIter " << iter << ": Training weights: " << *this
<< "\tAvg error: " << avg_pred_error << std::endl;
}
if (iter < MAX_ITER)
std::cout << "Converged after " << iter << " iterations."
<< std::endl;
else
std::cout << "Did not converge after " << iter << " iterations."
<< std::endl;
}
    /**
     * Quantizer (signum) activation function
     * \param[in] x weighted sum of the input features
     * \returns +1 if \f$x>0\f$, else -1
     */
    int activation(double x) { return x > 0 ? 1 : -1; }
private:
/**
     * Convenience function to check if the input feature vector size matches
     * the model's weight vector size
     * \param[in] x feature vector to check
* \returns `true` size matches
* \returns `false` size does not match
*/
bool check_size_match(const std::vector<double> &x) {
if (x.size() != (weights.size() - 1)) {
std::cerr << __func__ << ": "
<< "Number of features in x does not match the feature "
"dimension in model!"
<< std::endl;
return false;
}
return true;
}
const double eta; ///< learning rate of the algorithm
const double accuracy; ///< model fit convergence accuracy
std::vector<double> weights; ///< weights of the neural network
};
} // namespace machine_learning
using machine_learning::adaline;
/** @} */
/**
* test function to predict points in a 2D coordinate system above the line
* \f$x=y\f$ as +1 and others as -1.
* Note that each point is defined by 2 values or 2 features.
* \param[in] eta learning rate (optional, default=0.01)
*/
void test1(double eta = 0.01) {
adaline ada(2, eta); // 2 features
const int N = 10; // number of sample points
std::vector<double> X[N] = {{0, 1}, {1, -2}, {2, 3}, {3, -1},
{4, 1}, {6, -5}, {-7, -3}, {-8, 5},
{-9, 2}, {-10, -15}};
int y[] = {1, -1, 1, -1, -1, -1, 1, 1, 1, -1}; // corresponding y-values
std::cout << "------- Test 1 -------" << std::endl;
std::cout << "Model before fit: " << ada << std::endl;
ada.fit(X, y);
std::cout << "Model after fit: " << ada << std::endl;
int predict = ada.predict({5, -3});
std::cout << "Predict for x=(5,-3): " << predict;
assert(predict == -1);
std::cout << " ...passed" << std::endl;
predict = ada.predict({5, 8});
std::cout << "Predict for x=(5,8): " << predict;
assert(predict == 1);
std::cout << " ...passed" << std::endl;
}
/**
* test function to predict points in a 2D coordinate system above the line
* \f$x+3y=-1\f$ as +1 and others as -1.
* Note that each point is defined by 2 values or 2 features.
* The function will create random sample points for training and test purposes.
* \param[in] eta learning rate (optional, default=0.01)
*/
void test2(double eta = 0.01) {
adaline ada(2, eta); // 2 features
const int N = 50; // number of sample points
std::vector<double> X[N];
int Y[N]; // corresponding y-values
// generate sample points in the interval
// [-range2/100 , (range2-1)/100]
int range = 500; // sample points full-range
int range2 = range >> 1; // sample points half-range
for (int i = 0; i < N; i++) {
double x0 = ((std::rand() % range) - range2) / 100.f;
double x1 = ((std::rand() % range) - range2) / 100.f;
X[i] = {x0, x1};
Y[i] = (x0 + 3. * x1) > -1 ? 1 : -1;
}
std::cout << "------- Test 2 -------" << std::endl;
std::cout << "Model before fit: " << ada << std::endl;
ada.fit(X, Y);
std::cout << "Model after fit: " << ada << std::endl;
int N_test_cases = 5;
for (int i = 0; i < N_test_cases; i++) {
double x0 = ((std::rand() % range) - range2) / 100.f;
double x1 = ((std::rand() % range) - range2) / 100.f;
int predict = ada.predict({x0, x1});
std::cout << "Predict for x=(" << x0 << "," << x1 << "): " << predict;
int expected_val = (x0 + 3. * x1) > -1 ? 1 : -1;
assert(predict == expected_val);
std::cout << " ...passed" << std::endl;
}
}
/**
 * test function to predict points in a 3D coordinate system as +1 if they lie
 * within the sphere of radius 1 centred at the origin, and as -1 otherwise.
 * A point lies within the sphere if \f$x^2+y^2+z^2\le r^2=1\f$, else outside.
 * Note that each point is defined by 3 coordinates, but we use 6 features:
 * the three coordinates and their squares. The decision surface is quadratic
 * in the coordinates but linear in this extended feature space, so the model
 * can learn it. The function creates random sample points for training and
 * test purposes.
*
* \param[in] eta learning rate (optional, default=0.01)
*/
void test3(double eta = 0.01) {
    adaline ada(6, eta);  // 6 features
const int N = 100; // number of sample points
std::vector<double> X[N];
int Y[N]; // corresponding y-values
// generate sample points in the interval
// [-range2/100 , (range2-1)/100]
int range = 200; // sample points full-range
int range2 = range >> 1; // sample points half-range
for (int i = 0; i < N; i++) {
double x0 = ((std::rand() % range) - range2) / 100.f;
double x1 = ((std::rand() % range) - range2) / 100.f;
double x2 = ((std::rand() % range) - range2) / 100.f;
X[i] = {x0, x1, x2, x0 * x0, x1 * x1, x2 * x2};
Y[i] = ((x0 * x0) + (x1 * x1) + (x2 * x2)) <= 1.f ? 1 : -1;
}
std::cout << "------- Test 3 -------" << std::endl;
std::cout << "Model before fit: " << ada << std::endl;
ada.fit(X, Y);
std::cout << "Model after fit: " << ada << std::endl;
int N_test_cases = 5;
for (int i = 0; i < N_test_cases; i++) {
double x0 = ((std::rand() % range) - range2) / 100.f;
double x1 = ((std::rand() % range) - range2) / 100.f;
double x2 = ((std::rand() % range) - range2) / 100.f;
int predict = ada.predict({x0, x1, x2, x0 * x0, x1 * x1, x2 * x2});
std::cout << "Predict for x=(" << x0 << "," << x1 << "," << x2
<< "): " << predict;
int expected_val = ((x0 * x0) + (x1 * x1) + (x2 * x2)) <= 1.f ? 1 : -1;
assert(predict == expected_val);
std::cout << " ...passed" << std::endl;
}
}
/**
 * Main function
 * \param[in] argc argument count
 * \param[in] argv argument vector; `argv[1]` may optionally hold the
 * learning rate \f$\eta\f$
 */
int main(int argc, char **argv) {
    // seed the random number generator
    std::srand(static_cast<unsigned int>(std::time(nullptr)));
double eta = 0.1; // default value of eta
    if (argc == 2) {  // read eta value from command-line argument if present
        eta = std::strtod(argv[1], nullptr);
    }
test1(eta);
std::cout << "Press ENTER to continue..." << std::endl;
std::cin.get();
test2(eta);
std::cout << "Press ENTER to continue..." << std::endl;
std::cin.get();
test3(eta);
return 0;
}
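/**
 * Example build and run (an assumption for illustration; the repository's
 * CMake setup may differ):
 *
 *     g++ -std=c++11 -Wall -O2 adaline_learning.cpp -o adaline
 *     ./adaline 0.01   # optional learning-rate argument
 */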