TensorFlow.js is a JavaScript library developed by Google for running and training machine learning models in the browser or in Node.js.
The Adam optimizer (short for Adaptive Moment Estimation) is a stochastic gradient descent method based on adaptive estimates of the first-order and second-order moments of the gradients. The technique is highly efficient when working with large sets of data and parameters.
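Concretely, given the gradient $g_t$ at step $t$, the standard Adam update (from the original paper by Kingma and Ba) maintains bias-corrected moment estimates and updates the parameters $\theta$ as:

$$m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, g_t, \qquad v_t = \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2$$

$$\hat{m}_t = \frac{m_t}{1 - \beta_1^t}, \qquad \hat{v}_t = \frac{v_t}{1 - \beta_2^t}, \qquad \theta_{t+1} = \theta_t - \frac{\alpha\, \hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}$$

Here $\alpha$ is the learning rate, $\beta_1$ and $\beta_2$ are the decay rates for the two moment estimates, and $\epsilon$ is a small constant for numerical stability.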
In TensorFlow.js, the tf.train.adam() function creates a tf.AdamOptimizer that uses the Adam algorithm.
Syntax:
tf.train.adam(learningRate?, beta1?, beta2?, epsilon?)
Parameters:
- learningRate: The learning rate to use for the Adam gradient descent algorithm. It is optional.
- beta1: The exponential decay rate for the 1st moment estimates. It is optional.
- beta2: The exponential decay rate for the 2nd moment estimates. It is optional.
- epsilon: A small constant for numerical stability. It is optional.
Return Value: An AdamOptimizer.
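All four parameters are optional. As a quick sketch, the call below spells out every hyperparameter explicitly; the values shown mirror the commonly used defaults (treat the exact defaults as an assumption and consult the API reference linked at the end).

Javascript

// Constructing an AdamOptimizer with every hyperparameter explicit.
// The values mirror the commonly cited defaults (an assumption here).
const optimizer = tf.train.adam(
    0.001,  // learningRate
    0.9,    // beta1: decay rate for the 1st moment estimates
    0.999,  // beta2: decay rate for the 2nd moment estimates
    1e-8    // epsilon: small constant for numerical stability
);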
Example 1: A cubic function is defined over the input tensors x and y, with l, m, and n as randomly initialized coefficients. We then compute the mean squared loss of the prediction and pass it to the Adam optimizer, which minimizes the loss and adjusts the coefficients accordingly.
Javascript
// A cubic function with coefficients l, m, n.
const x = tf.tensor1d([0, 1, 2, 3]);
const y = tf.tensor1d([1., 2., 5., 11.]);

const l = tf.scalar(Math.random()).variable();
const m = tf.scalar(Math.random()).variable();
const n = tf.scalar(Math.random()).variable();

// y = l * x^3 - m * x + n.
const f = x => l.mul(x.pow(3)).sub(m.mul(x)).add(n);
const loss = (pred, label) => pred.sub(label).square().mean();

const learningRate = 0.01;
const optimizer = tf.train.adam(learningRate);

// Train the model, printing the coefficients after each step.
for (let i = 0; i < 10; i++) {
    optimizer.minimize(() => loss(f(x), y));
    console.log(
        `l: ${l.dataSync()}, m: ${m.dataSync()}, n: ${n.dataSync()}`);
}

// Make predictions with the fitted coefficients.
const preds = f(x).dataSync();
preds.forEach((pred, i) => {
    console.log(`x: ${i}, pred: ${pred}`);
});
Output:
l: 0.5212615132331848, m: 0.4882013201713562, n: 0.9879841804504395
l: 0.5113212466239929, m: 0.49809587001800537, n: 0.9783468246459961
l: 0.5014950633049011, m: 0.5077731013298035, n: 0.969675600528717
l: 0.49185076355934143, m: 0.5170749425888062, n: 0.9630305171012878
l: 0.48247095942497253, m: 0.5257879495620728, n: 0.9595866799354553
l: 0.47345229983329773, m: 0.5336435437202454, n: 0.9596782922744751
l: 0.4649032950401306, m: 0.5403363704681396, n: 0.9626657962799072
l: 0.4569399356842041, m: 0.5455683469772339, n: 0.9677067995071411
l: 0.4496782124042511, m: 0.5491118431091309, n: 0.9741682410240173
l: 0.44322386384010315, m: 0.5508641004562378, n: 0.9816395044326782
x: 0, pred: 0.9816395044326782
x: 1, pred: 0.8739992380142212
x: 2, pred: 3.4257020950317383
x: 3, pred: 11.29609203338623
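As a variation on the training loop above (a sketch, not part of the original example), optimizer.minimize() also accepts a returnCost flag; passing true makes it return the loss as a scalar tensor, so convergence can be logged directly.

Javascript

// Variation on the loop above: passing returnCost = true makes
// minimize() return the loss, so convergence can be monitored.
for (let i = 0; i < 10; i++) {
    const cost = optimizer.minimize(() => loss(f(x), y), true);
    console.log(`step ${i}, loss: ${cost.dataSync()}`);
    cost.dispose();  // free the returned scalar
}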
Example 2: Below, we design a simple model, define an optimizer with tf.train.adam using a learning rate of 0.001, and pass it to the model's compile step.
Javascript
// Import the TensorFlow.js library
import * as tf from "@tensorflow/tfjs";

// Define the model
const model = tf.sequential({
    layers: [tf.layers.dense({ units: 1, inputShape: [12] })],
});

// Use the tf.train.adam optimizer when compiling the model
const optimizer = tf.train.adam(0.001);
model.compile({
    optimizer: optimizer,
    loss: "meanSquaredError",
    metrics: ["accuracy"],
});

// Evaluate the model that was compiled above
const result = model.evaluate(tf.ones([10, 12]), tf.ones([10, 1]), {
    batchSize: 4,
});

// Print the result
result.print();
Output:
Tensor
1.520470142364502
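Example 2 only evaluates the model. As a follow-on sketch (not part of the original example), the same compiled model can be trained with the Adam optimizer through model.fit() on dummy tensors of matching shape.

Javascript

// Train the compiled model above on dummy data for a few epochs;
// the Adam optimizer drives the weight updates.
const xs = tf.ones([10, 12]);
const ys = tf.ones([10, 1]);
model.fit(xs, ys, { epochs: 5, batchSize: 4 }).then((history) => {
    console.log(history.history.loss);  // per-epoch training loss
});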
Reference: https://js.tensorflow.org/api/3.6.0/#train.adam
