
The Extra Softmax, Max Entropy, and Exploring the Entropy Tradeoff

In [1]:
%pylab
%matplotlib inline
import torch
Using matplotlib backend: TkAgg
Populating the interactive namespace from numpy and matplotlib
In [2]:
import jovian

Purpose

We've been seeing that using an extra softmax helps our method. Here I'm going to show what that extra softmax is doing -- it simulates maximum entropy by pushing the target very close to the center of the simplex -- and propose an experiment for testing the effect of the target entropy on our method.
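
As a toy illustration of the effect (the 5-class target here is made up): applying a softmax to a vector whose entries already lie in \([0, 1]\) exponentiates them into the narrow range \([1, e]\), so the normalized output lands near the uniform distribution.

In [ ]:
import torch
t = torch.tensor([0.90, 0.05, 0.03, 0.01, 0.01])  # a peaked toy target
print(t.softmax(0))  # noticeably flatter: roughly [0.37, 0.16, 0.16, 0.15, 0.15]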

The "extra" softmax simulates max entropy

Let's load up the data and take a look. We'll calculate the vector from our target to the center of the simplex (max entropy point) and use it to compare the original targets to the softened ones.

In [3]:
# These are the class data means from correct predictions
p = np.load('../moving-target/mean-correct-targets.npy')
p = torch.from_numpy(p).float()
# uniform distribution (center of the simplex)
u = torch.ones(200) / 200
# the extra softmax
q = p.softmax(1)

Now we'll check that the additional softmax is moving the target along the line from the target to the center.

In [4]:
p1, q1 = p[0], q[0]
x = u - p1
z = q1 - p1
# cosine of the angle between z and x
cos_theta = z @ x / (z.norm() * x.norm())
print(cos_theta)
tensor(1.0000)

The cosine is \(\cos\theta=1\) (to display precision), so \(z\) and \(x\) point in the same direction: the softmaxed target lies on the line from the original target to the center. Now we'll check how far along that line the extra softmax pushes things.

In [5]:
tau = z.norm() / x.norm()
print(tau)
tensor(0.9924)

We see that \(\tau=\frac{\|z\|}{\|x\|}>0.99\), so the extra softmax is pushing the target most of the way to the center, simulating max entropy.
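
Row 0 isn't special: we can run the same check over every row at once (a quick sketch; `D` and `Z` are just per-row versions of `x` and `z`):

In [ ]:
import torch.nn.functional as F
D = u - p                 # per-row direction from each target to the center
Z = q - p                 # per-row displacement produced by the extra softmax
cos = F.cosine_similarity(Z, D, dim=1)
tau = Z.norm(dim=1) / D.norm(dim=1)
print(cos.min().item(), tau.min().item())  # both should be close to 1

To verify, compare the original target with the softmaxed one.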

In [6]:
p1
Out[6]:
tensor([7.9400e-01, 1.6753e-02, 5.6975e-02, 3.1032e-05, 7.8424e-04, 7.8507e-05,
        1.0782e-04, 1.5982e-02, 2.2106e-04, 2.9301e-05, 2.5673e-04, 1.0770e-04,
        1.2659e-05, 3.7111e-05, 3.0411e-05, 5.9611e-05, 1.8088e-04, 3.6125e-05,
        7.1499e-05, 1.0800e-04, 7.6869e-05, 8.3567e-05, 1.0500e-02, 4.0368e-04,
        2.4693e-03, 5.6095e-05, 1.3125e-04, 7.1096e-05, 2.4941e-04, 3.1038e-04,
        1.4998e-03, 1.2103e-03, 4.9177e-04, 1.6546e-04, 5.6454e-05, 1.5427e-04,
        4.0933e-05, 2.5171e-04, 6.2569e-05, 4.1434e-05, 4.8491e-05, 2.7602e-05,
        6.3941e-05, 1.0753e-03, 2.1339e-02, 2.2108e-03, 3.9450e-05, 5.6008e-05,
        6.8373e-04, 3.7812e-04, 3.5839e-04, 1.2005e-03, 2.3709e-04, 2.8695e-05,
        1.1953e-04, 4.5396e-05, 4.2419e-05, 1.7385e-03, 1.8778e-04, 4.4532e-04,
        6.8363e-04, 8.6202e-04, 1.4244e-04, 1.6057e-04, 1.6631e-03, 4.6013e-04,
        3.2180e-05, 1.5523e-04, 1.9438e-04, 2.5988e-05, 8.4253e-03, 2.4351e-02,
        9.5351e-05, 8.7904e-05, 5.3940e-05, 6.1045e-05, 1.1324e-04, 6.1726e-05,
        4.8994e-05, 3.0583e-05, 4.5530e-05, 5.5444e-05, 8.9471e-05, 3.3659e-04,
        9.1414e-05, 5.1669e-03, 7.1814e-04, 5.2104e-05, 2.0429e-04, 1.2744e-03,
        2.3620e-04, 1.8293e-04, 1.1872e-04, 5.4327e-05, 3.1723e-05, 5.1693e-05,
        1.1794e-04, 4.6184e-05, 9.9657e-05, 6.9193e-03, 5.4289e-04, 5.6662e-05,
        1.6019e-04, 1.1137e-04, 5.7515e-05, 6.8891e-04, 1.7928e-04, 2.2272e-04,
        3.7473e-05, 8.7746e-05, 2.3163e-05, 2.7916e-05, 2.1004e-05, 3.4099e-05,
        1.4895e-05, 8.3625e-05, 2.1907e-05, 4.4227e-05, 7.6754e-05, 7.8252e-05,
        4.3466e-05, 3.5467e-05, 1.6143e-05, 3.5147e-05, 2.0239e-05, 5.2737e-05,
        2.6417e-05, 4.9156e-05, 3.8748e-05, 3.6766e-05, 3.2300e-05, 2.5248e-05,
        2.6903e-05, 4.9277e-05, 3.9887e-04, 1.7793e-04, 3.1454e-04, 4.7852e-05,
        4.1827e-05, 9.5474e-05, 8.8998e-05, 1.1523e-03, 2.2142e-04, 2.0178e-04,
        9.8395e-05, 7.3821e-05, 4.5000e-05, 3.2334e-05, 1.9521e-04, 2.2499e-05,
        2.7294e-05, 7.1700e-05, 3.3853e-04, 2.2040e-04, 3.2905e-04, 6.4861e-05,
        4.6680e-05, 4.9442e-05, 3.1688e-05, 2.6381e-05, 3.3573e-05, 4.6597e-05,
        3.0692e-05, 2.8761e-05, 1.2577e-05, 2.0695e-05, 3.9407e-05, 6.4658e-05,
        3.0789e-05, 4.3688e-05, 2.8921e-05, 6.4651e-05, 8.4022e-05, 4.5606e-05,
        5.8428e-05, 9.0680e-05, 9.4172e-05, 1.7482e-04, 1.1109e-04, 2.1059e-05,
        5.3302e-05, 5.2510e-05, 7.6092e-05, 1.8222e-04, 1.7560e-04, 1.3316e-03,
        3.1980e-05, 1.2821e-04, 1.2560e-04, 5.6164e-05, 6.3314e-05, 2.1609e-05,
        1.4433e-04, 3.3197e-05, 2.7223e-04, 1.8487e-04, 1.1031e-04, 6.2287e-05,
        1.3077e-04, 5.0753e-05])
In [7]:
q1
Out[7]:
tensor([0.0110, 0.0050, 0.0053, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0051,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0051,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050])

The values aren't actually all 0.005; Torch just rounds them for display. Here are some of the actual values.

In [8]:
[a.item() for a in q1[:20]]
Out[8]:
[0.01098309550434351,
 0.005048607010394335,
 0.005255808588117361,
 0.004964884370565414,
 0.0049686250276863575,
 0.004965119995176792,
 0.004965265281498432,
 0.00504471268504858,
 0.0049658278003335,
 0.004964875988662243,
 0.004966004751622677,
 0.004965265281498432,
 0.004964792635291815,
 0.004964914638549089,
 0.004964881110936403,
 0.004965025931596756,
 0.004965628497302532,
 0.004964909516274929,
 0.004965085536241531,
 0.004965266212821007]
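
Equivalently, raising Torch's print precision shows the spread without pulling values out one at a time:

In [ ]:
torch.set_printoptions(precision=8)
print(q1[:5])
torch.set_printoptions(precision=4)  # restore the default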

Since one softmax takes us almost all the way to the center, it makes sense that applying the softmax twice would be a bad idea. Let's check:

In [15]:
qq = q.softmax(1)
print(qq[0])
tensor([0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050])

Yep. That makes the target more or less uniform.
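
We can quantify this with entropies. For 200 classes the maximum entropy is \(\log 200 \approx 5.30\) nats; here's a quick sketch (the `entropy` helper is ours, and the epsilon only guards against \(\log 0\)):

In [ ]:
def entropy(t):
    # Shannon entropy in nats; epsilon avoids log(0)
    return -(t * (t + 1e-12).log()).sum(-1)

# expect: p1 well below max, q1 and qq[0] essentially at max
for name, t in [('p1', p1), ('q1', q1), ('qq[0]', qq[0])]:
    print(name, entropy(t).item())
print('max', torch.log(torch.tensor(200.)).item())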

Experiment Proposal

The softmax takes the distribution very close to the max entropy point. I think we should explore the effect that the target's entropy has on our method. I propose running a few experiments where we vary the entropy by interpolating along the line from each target to the max entropy point, choosing a few interpolation weights. As an example, we could try \(0.1, 0.5, 0.9, 0.99\). Those targets would look like this:

In [9]:
# per-row direction from each target to the center (x above was only row 0's)
d = u - p
p_1 = p + 0.1 * d
p_5 = p + 0.5 * d
p_9 = p + 0.9 * d
p_99 = p + 0.99 * d
In [10]:
p_1[0]
Out[10]:
tensor([7.1510e-01, 1.5578e-02, 5.1777e-02, 5.2793e-04, 1.2058e-03, 5.7066e-04,
        5.9704e-04, 1.4884e-02, 6.9895e-04, 5.2637e-04, 7.3106e-04, 5.9693e-04,
        5.1139e-04, 5.3340e-04, 5.2737e-04, 5.5365e-04, 6.6279e-04, 5.3251e-04,
        5.6435e-04, 5.9720e-04, 5.6918e-04, 5.7521e-04, 9.9500e-03, 8.6331e-04,
        2.7223e-03, 5.5049e-04, 6.1813e-04, 5.6399e-04, 7.2447e-04, 7.7934e-04,
        1.8498e-03, 1.5893e-03, 9.4259e-04, 6.4892e-04, 5.5081e-04, 6.3884e-04,
        5.3684e-04, 7.2654e-04, 5.5631e-04, 5.3729e-04, 5.4364e-04, 5.2484e-04,
        5.5755e-04, 1.4678e-03, 1.9705e-02, 2.4897e-03, 5.3551e-04, 5.5041e-04,
        1.1154e-03, 8.4031e-04, 8.2255e-04, 1.5804e-03, 7.1338e-04, 5.2583e-04,
        6.0757e-04, 5.4086e-04, 5.3818e-04, 2.0646e-03, 6.6900e-04, 9.0079e-04,
        1.1153e-03, 1.2758e-03, 6.2819e-04, 6.4451e-04, 1.9968e-03, 9.1412e-04,
        5.2896e-04, 6.3970e-04, 6.7495e-04, 5.2339e-04, 8.0828e-03, 2.2416e-02,
        5.8582e-04, 5.7911e-04, 5.4855e-04, 5.5494e-04, 6.0192e-04, 5.5555e-04,
        5.4409e-04, 5.2752e-04, 5.4098e-04, 5.4990e-04, 5.8052e-04, 8.0293e-04,
        5.8227e-04, 5.1502e-03, 1.1463e-03, 5.4689e-04, 6.8386e-04, 1.6469e-03,
        7.1258e-04, 6.6464e-04, 6.0685e-04, 5.4889e-04, 5.2855e-04, 5.4652e-04,
        6.0615e-04, 5.4157e-04, 5.8969e-04, 6.7274e-03, 9.8860e-04, 5.5100e-04,
        6.4417e-04, 6.0023e-04, 5.5176e-04, 1.1200e-03, 6.6135e-04, 7.0045e-04,
        5.3373e-04, 5.7897e-04, 5.2085e-04, 5.2512e-04, 5.1890e-04, 5.3069e-04,
        5.1341e-04, 5.7526e-04, 5.1972e-04, 5.3980e-04, 5.6908e-04, 5.7043e-04,
        5.3912e-04, 5.3192e-04, 5.1453e-04, 5.3163e-04, 5.1821e-04, 5.4746e-04,
        5.2377e-04, 5.4424e-04, 5.3487e-04, 5.3309e-04, 5.2907e-04, 5.2272e-04,
        5.2421e-04, 5.4435e-04, 8.5898e-04, 6.6014e-04, 7.8308e-04, 5.4307e-04,
        5.3764e-04, 5.8593e-04, 5.8010e-04, 1.5370e-03, 6.9928e-04, 6.8160e-04,
        5.8856e-04, 5.6644e-04, 5.4050e-04, 5.2910e-04, 6.7569e-04, 5.2025e-04,
        5.2457e-04, 5.6453e-04, 8.0468e-04, 6.9836e-04, 7.9614e-04, 5.5837e-04,
        5.4201e-04, 5.4450e-04, 5.2852e-04, 5.2374e-04, 5.3022e-04, 5.4194e-04,
        5.2762e-04, 5.2589e-04, 5.1132e-04, 5.1863e-04, 5.3547e-04, 5.5819e-04,
        5.2771e-04, 5.3932e-04, 5.2603e-04, 5.5819e-04, 5.7562e-04, 5.4105e-04,
        5.5259e-04, 5.8161e-04, 5.8475e-04, 6.5734e-04, 5.9998e-04, 5.1895e-04,
        5.4797e-04, 5.4726e-04, 5.6848e-04, 6.6400e-04, 6.5804e-04, 1.6984e-03,
        5.2878e-04, 6.1539e-04, 6.1304e-04, 5.5055e-04, 5.5698e-04, 5.1945e-04,
        6.2990e-04, 5.2988e-04, 7.4500e-04, 6.6638e-04, 5.9928e-04, 5.5606e-04,
        6.1769e-04, 5.4568e-04])
In [11]:
p_5[0]
Out[11]:
tensor([0.3995, 0.0109, 0.0310, 0.0025, 0.0029, 0.0025, 0.0026, 0.0105, 0.0026,
        0.0025, 0.0026, 0.0026, 0.0025, 0.0025, 0.0025, 0.0025, 0.0026, 0.0025,
        0.0025, 0.0026, 0.0025, 0.0025, 0.0078, 0.0027, 0.0037, 0.0025, 0.0026,
        0.0025, 0.0026, 0.0027, 0.0032, 0.0031, 0.0027, 0.0026, 0.0025, 0.0026,
        0.0025, 0.0026, 0.0025, 0.0025, 0.0025, 0.0025, 0.0025, 0.0030, 0.0132,
        0.0036, 0.0025, 0.0025, 0.0028, 0.0027, 0.0027, 0.0031, 0.0026, 0.0025,
        0.0026, 0.0025, 0.0025, 0.0034, 0.0026, 0.0027, 0.0028, 0.0029, 0.0026,
        0.0026, 0.0033, 0.0027, 0.0025, 0.0026, 0.0026, 0.0025, 0.0067, 0.0147,
        0.0025, 0.0025, 0.0025, 0.0025, 0.0026, 0.0025, 0.0025, 0.0025, 0.0025,
        0.0025, 0.0025, 0.0027, 0.0025, 0.0051, 0.0029, 0.0025, 0.0026, 0.0031,
        0.0026, 0.0026, 0.0026, 0.0025, 0.0025, 0.0025, 0.0026, 0.0025, 0.0025,
        0.0060, 0.0028, 0.0025, 0.0026, 0.0026, 0.0025, 0.0028, 0.0026, 0.0026,
        0.0025, 0.0025, 0.0025, 0.0025, 0.0025, 0.0025, 0.0025, 0.0025, 0.0025,
        0.0025, 0.0025, 0.0025, 0.0025, 0.0025, 0.0025, 0.0025, 0.0025, 0.0025,
        0.0025, 0.0025, 0.0025, 0.0025, 0.0025, 0.0025, 0.0025, 0.0025, 0.0027,
        0.0026, 0.0027, 0.0025, 0.0025, 0.0025, 0.0025, 0.0031, 0.0026, 0.0026,
        0.0025, 0.0025, 0.0025, 0.0025, 0.0026, 0.0025, 0.0025, 0.0025, 0.0027,
        0.0026, 0.0027, 0.0025, 0.0025, 0.0025, 0.0025, 0.0025, 0.0025, 0.0025,
        0.0025, 0.0025, 0.0025, 0.0025, 0.0025, 0.0025, 0.0025, 0.0025, 0.0025,
        0.0025, 0.0025, 0.0025, 0.0025, 0.0025, 0.0025, 0.0026, 0.0026, 0.0025,
        0.0025, 0.0025, 0.0025, 0.0026, 0.0026, 0.0032, 0.0025, 0.0026, 0.0026,
        0.0025, 0.0025, 0.0025, 0.0026, 0.0025, 0.0026, 0.0026, 0.0026, 0.0025,
        0.0026, 0.0025])
In [12]:
p_9[0]
Out[12]:
tensor([0.0839, 0.0062, 0.0102, 0.0045, 0.0046, 0.0045, 0.0045, 0.0061, 0.0045,
        0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045,
        0.0045, 0.0045, 0.0045, 0.0045, 0.0056, 0.0045, 0.0047, 0.0045, 0.0045,
        0.0045, 0.0045, 0.0045, 0.0046, 0.0046, 0.0045, 0.0045, 0.0045, 0.0045,
        0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0046, 0.0066,
        0.0047, 0.0045, 0.0045, 0.0046, 0.0045, 0.0045, 0.0046, 0.0045, 0.0045,
        0.0045, 0.0045, 0.0045, 0.0047, 0.0045, 0.0045, 0.0046, 0.0046, 0.0045,
        0.0045, 0.0047, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0053, 0.0069,
        0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045,
        0.0045, 0.0045, 0.0045, 0.0045, 0.0050, 0.0046, 0.0045, 0.0045, 0.0046,
        0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045,
        0.0052, 0.0046, 0.0045, 0.0045, 0.0045, 0.0045, 0.0046, 0.0045, 0.0045,
        0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045,
        0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045,
        0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045,
        0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0046, 0.0045, 0.0045,
        0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045,
        0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045,
        0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045,
        0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045,
        0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0046, 0.0045, 0.0045, 0.0045,
        0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045, 0.0045,
        0.0045, 0.0045])
In [13]:
p_99[0]
Out[13]:
tensor([0.0129, 0.0051, 0.0055, 0.0050, 0.0050, 0.0050, 0.0050, 0.0051, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0051, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0052,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0052,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050, 0.0050,
        0.0050, 0.0050])
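
With the `entropy` helper from above, we can also check how the entropy of the first target climbs with the weight (a sketch using the proposed weights):

In [ ]:
for w, t in zip([0.1, 0.5, 0.9, 0.99], [p_1, p_5, p_9, p_99]):
    print(f'weight {w}: entropy {entropy(t[0]).item():.3f}')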
In [ ]:
jovian.commit()
[jovian] Saving notebook..