Machine Learning Primer


This blog post was translated by Mistral


Since we’re learning Python, let’s talk about machine learning as well, since many of its libraries are written in Python. Let’s install them first and give them a try.

```shell
$ pip install tensorflow
ERROR: No matching distribution found for tensorflow
```

The problem: Tensorflow 2 only supports Python 3.5-3.8, and we are using 3.9.

To resolve this issue, you can try installing Tensorflow with Python 3.8 or downgrade your Python version to 3.8. Here's how you can install Tensorflow with Python 3.8 using pip:

```shell
$ python3.8 -m pip install tensorflow
```

If you prefer to use the Anaconda distribution, you can create a new environment with Python 3.8 and install Tensorflow there:

```shell
$ conda create -n tfenv python=3.8
$ source activate tfenv
$ conda install tensorflow
```

Once the installation is complete, you can verify the Tensorflow installation by running the following command:

```shell
$ python3.8 -c "import tensorflow as tf; print(tf.__version__)"
```

This should print the Tensorflow version number. I notice that my system’s python3 is version 3.8.2. Where is the pip installation for this Python version?

The standard location for pip on Unix-based systems is in the same directory as the Python executable, with the name pip3. So, you can try running pip3 to check if it’s installed and to use it. If it’s not installed, you can install it using the package manager of your operating system or by downloading the get-pip.py script and running it with Python3. Here’s an example of how to install pip3 using apt (for Ubuntu/Debian) or yum (for CentOS/RHEL):

For Ubuntu/Debian:

```shell
sudo apt-get update
sudo apt-get install python3-pip
```

For CentOS/RHEL:

```shell
sudo yum install python3-pip
```

After installing pip3, you can verify the installation by running `pip3 --version` in your terminal. This should display the version number of pip installed with your Python 3. Here is the corresponding pip. I will modify the .zprofile file now. Recently, I changed my shell; .zprofile is similar to the previous .bash_profile. Add a line:

```shell
alias pip3=/Users/lzw/Library/Python/3.8/bin/pip3
```
To use Tensorflow with Python 3, we install it using pip3:

```shell
pip3 install tensorflow
...
Installed: absl-py-0.12.0, astunparse-1.6.3, cachetools-4.2.1, certifi-2020.12.5, chardet-4.0.0, flatbuffers-1.12, gast-0.3.3, google-auth-1.27.1, google-auth-oauthlib-0.4.3, google-pasta-0.2.0, grpcio-1.32.0, h5py-2.10.0, idna-2.10, keras-preprocessing-1.1.2, markdown-3.3.4, numpy-1.19.5, oauthlib-3.1.0, opt-einsum-3.3.0, protobuf-3.15.6, pyasn1-0.4.8, pyasn1-modules-0.2.8, requests-2.25.1, requests-oauthlib-1.3.0, rsa-4.7.2, tensorboard-2.4.1, tensorboard-plugin-wit-1.8.0, tensorflow-2.4.1, tensorflow-estimator-2.4.0, termcolor-1.1.0, typing-extensions-3.7.4.3.3, urllib3-1.26.3, werkzeug-1.0.1, wheel-0.36.2, wrapt-1.12.1
```

Many libraries were installed. Let’s try an example from the official website.

```python
import tensorflow as tf

mnist = tf.keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Scale the pixel values from 0-255 down to 0-1
x_train, x_test = x_train / 255.0, x_test / 255.0

# Model definition
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10)
])

# Predictions for the first sample of x_train
predictions = model(x_train[:1]).numpy()
print(predictions)
```

Running it prints:

```shell
[[ 0.15477428 -0.3877643   0.0994779   0.07474922 -0.26219758 -0.03550266
   0.32226565 -0.37141111  1.0925996  -0.0115255 ]]
```

This is the output after downloading the dataset.
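Those ten numbers are raw logits, not probabilities, because the last `Dense(10)` layer has no activation. A minimal sketch of what softmax does to them, in plain Python on the printed values (the tutorial later gets the same effect by attaching a `Softmax` layer):

```python
import math

# The ten logits printed above, one per class
logits = [0.15477428, -0.3877643, 0.0994779, 0.07474922, -0.26219758,
          -0.03550266, 0.32226565, -0.37141111, 1.0925996, -0.0115255]

# softmax: exponentiate, then normalize so the values sum to 1
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

print(round(sum(probs), 6))     # 1.0 — a valid probability distribution
print(probs.index(max(probs)))  # 8 — the class with the largest logit wins
```

Since `exp` is monotonic, the most likely class is simply the one with the largest logit.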

Next, let’s look at an example of image classification.

First, import the necessary libraries:

  1. TensorFlow and tf.keras: `import tensorflow as tf`
  2. Helper libraries: `import numpy as np`, `import matplotlib.pyplot as plt`

Running this raises an error:

```shell
ModuleNotFoundError: No module named 'matplotlib'
```

To resolve the error, you need to install the matplotlib module. Since our Tensorflow lives under Python 3.8, install it with pip3:

```shell
pip3 install matplotlib
```

Once the installation is complete, run the script named `image.py` with Python 3:

```shell
$ /usr/bin/python3 image.py
3.5.2
```

Here `3.5.2` is presumably the installed matplotlib version (the Tensorflow installed earlier was 2.4.1). Now copy and paste the example code.

```python
# TensorFlow and tf.keras
import tensorflow as tf

# Helper libraries
import numpy as np
import matplotlib.pyplot as plt

fashion_mnist = tf.keras.datasets.fashion_mnist

(train_images, train_labels), (test_images, test_labels) = fashion_mnist.load_data()

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
```

The following was printed out. Notice that here we have `train_images`, `train_labels`, `test_images`, `test_labels`: the training data set and the testing data set are kept separate.

```shell
(60000, 28, 28)
60000
```

Try printing out an image next.

```python
print(train_images[0])
```

Check the result. A matrix of pixel values is printed; one row looks like this:

```shell
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 69, 207, 223, 218, 216, 216, 163, 127, 121, 122, 146, 141, 88, 172, 66]
```
The `TypeError` that appears when we index one level too deep comes from calling `len()` on a single pixel value, which is a NumPy `uint8` scalar, not an array. The calls that do work:

```python
print(len(train_images))        # 60000, the number of images
print(len(train_images[0]))     # 28, rows in one image
print(len(train_images[0][0]))  # 28, columns in one row
```

The output is 28. It is very clear that each image is a matrix with a width and height of 28. At first we might assume each image is a `28*28*3` array whose last dimension stores RGB values. However, our thinking might be wrong.
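Rather than nested `len()` calls, NumPy's `.shape` attribute reports every dimension at once. A small sketch using a stand-in array with the same layout as `train_images` (zeros instead of real pixels, purely for illustration):

```python
import numpy as np

# Stand-in for train_images: 60000 images of 28x28 8-bit pixels
images = np.zeros((60000, 28, 28), dtype=np.uint8)

print(images.shape)     # (60000, 28, 28) — all dimensions at once
print(len(images))      # 60000 — len() only gives the first dimension
print(images[0].shape)  # (28, 28) — a single image
```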

```python
print(train_images[0][1][20])
```

```shell
0
```

This prints one pixel of the first image: row 1, column 20. Indexing two levels into an image returns a single number, not an `[r, g, b]` triple, so each image is a `28*28` array of grayscale intensities, with each element the intensity value of a single pixel. After experimenting, we finally figured out the secret.
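The grayscale-versus-RGB distinction can be sketched with two tiny NumPy arrays (hypothetical 2x2 images, not the real dataset):

```python
import numpy as np

gray = np.zeros((2, 2), dtype=np.uint8)    # grayscale: one number per pixel
rgb = np.zeros((2, 2, 3), dtype=np.uint8)  # color: three numbers per pixel

print(gray[0][1])       # a scalar intensity
print(rgb[0][1])        # an [r g b] triple
print(gray[0][1].ndim)  # 0 — indexing bottoms out at a scalar
print(rgb[0][1].shape)  # (3,) — one more level to go
```

Fashion MNIST images behave like `gray` here, which is why indexing a row and column already yields a plain number.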

First, let's look at the output image.

```python
plt.figure()
plt.imshow(train_images[0])
plt.colorbar()
plt.grid(False)  # turn off grid
plt.show()       # display the plot
```

I see the colorbar on the right, from `0` to `250`. It turns out the image is rendered as a gradient between colors. But how does it know which colors? We told it nowhere.
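The answer is that when `cmap` is omitted, `imshow` falls back to matplotlib's default colormap, which is stored in the rcParams (in modern matplotlib the default is `viridis`). A quick check:

```python
import matplotlib
matplotlib.use('Agg')  # headless backend, so this runs without a display
import matplotlib.pyplot as plt

# The colormap imshow uses when the cmap= argument is omitted
print(plt.rcParams['image.cmap'])  # 'viridis'
```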

Next, let’s print out the second image as well:

```python
plt.imshow(train_images[1])
# And the third image
plt.imshow(train_images[2])
```

Interesting. Is this the default behavior of the pyplot library? Let’s continue with the code provided on the official website: create a 10x10 figure, loop over 25 indices, create a subplot on a 5x5 grid for each, disable tick labels and grid lines, display each image with the binary colormap, and label it with its class name.

```python
plt.figure(figsize=(10,10))
for i in range(25):
    plt.subplot(5, 5, i+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(train_images[i], cmap=plt.cm.binary)
    plt.xlabel(class_names[train_labels[i]])
plt.show()
```

Notice that this displays each image along with its label. Finally, we know what the `cmap` parameter is for. If nothing is passed for `cmap`, the colors are the same as before. Indeed:

```python
plt.imshow(train_images[i])
```

This makes us search for `pyplot cmap`. We found some resources:

```python
plt.imshow(train_images[i], cmap=plt.cm.PiYG)
```

This sets the colormap for Matplotlib’s `imshow` function to PiYG, one of the colormaps Matplotlib provides. Let’s change the code:

```python
plt.figure(figsize=(10,10))
for i in range(25):
    plt.subplot(2, 5, i+1)   # changed from plt.subplot(5, 5, i+1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(train_images[i], cmap=plt.cm.Blues)
    plt.xlabel(class_names[train_labels[i]])
plt.show()
```
IV. plt This means an error. Previously, `5, 5, i+1` apparently means 5 rows and 5 columns. But why does it raise this error when changed to 2? Although we intuitively know it's supposed to be a 5x5 grid. Why does this error occur? How did 11 come about? What does `num` mean? What is `10`? Notice that `2*5=10`. So maybe an error occurred when `i=11`. When changed to `for i in range(10):`, the following result is obtained.

<img src="/assets/images/ml/plot3.png" alt="plot3" style="zoom:20%;" />
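The constraint matplotlib enforces can be sketched in plain Python (a simplified check, not matplotlib's actual implementation): `plt.subplot(nrows, ncols, num)` requires `1 <= num <= nrows*ncols`.

```python
def subplot_num_ok(nrows, ncols, num):
    # plt.subplot(nrows, ncols, num) requires 1 <= num <= nrows*ncols
    return 1 <= num <= nrows * ncols

print(subplot_num_ok(5, 5, 11))  # True  — a 5x5 grid has 25 cells
print(subplot_num_ok(2, 5, 11))  # False — a 2x5 grid has only 10 cells,
                                 #         so the loop fails once i+1 == 11
```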

Let's take a look at the documentation. Ah, now we understand: the official example creates a 10x10 figure and, for each index `i` in the range of 25, creates subplot `i+1` on a 5x5 grid, turns off the ticks and grid, displays `train_images[i]` with the chosen colormap, and labels the x-axis with the class name from `class_names` at the index given by `train_labels[i]`. The third argument of `subplot` is a position on the grid, so it can be at most `nrows*ncols`. Also notice that the `0 25` numbers along an axis are called `xticks`. When we zoom this box in or out, they display differently.

![plot_scale](assets/images/ml/plot_scale.png)

Notice that as the box zooms in or out, the `xticks` and `xlabels` display differently. Now the model:

```python
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(10)
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
)
```

I noticed that the model was defined using the `Sequential` class. Pay attention to these parameters: `28,28`, `128`, `relu`, `10`, and to the need for `compile` and `fit`. Fit means fitting. Note that `28,28` is the image size.

```python
metrics = ['accuracy']

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=metrics)

model.fit(train_images, train_labels, epochs=10)

test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)

print('\nTest accuracy:', test_acc)
```

This compiles the model with the Adam optimizer, the sparse categorical cross-entropy loss, and an accuracy metric, then trains it on the training images and labels for 10 epochs and evaluates it on the test data:

```shell
Epoch 1/10
1875/1875 - 2s - loss: 0.6331 - accuracy: 0.7769
Epoch 2/10
1875/1875 - 2s - loss: 0.3860 - accuracy: 0.8615
Epoch 3/10
1875/1875 - 2s - loss: 0.3395 - accuracy: 0.8755
Epoch 4/10
1875/1875 - 2s - loss: 0.3071 - accuracy: 0.8890
Epoch 5/10
1875/1875 - 2s - loss: 0.2964 - accuracy: 0.8927
Epoch 6/10
1875/1875 - 2s - loss: 0.2764 - accuracy: 0.8955
Epoch 7/10
1875/1875 - 2s - loss: 0.2653 - accuracy: 0.8996
Epoch 8/10
1875/1875 - 2s - loss: 0.2549 - accuracy: 0.9052
Epoch 9/10
1875/1875 - 2s - loss: 0.2416 - accuracy: 0.9090
Epoch 10/10
1875/1875 [==============================] - 2s 1ms/step - loss: 0.2372 - accuracy: 0.9086
313/313 - 0s - loss: 0.3422 - accuracy: 0.8798

Test accuracy: 0.879800021648407
```

The model has been trained. Let’s adjust some parameters.

```python
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(64, activation='relu'),  # change the number of neurons in the first Dense layer
    tf.keras.layers.Dense(10)
])
```

```shell
Epoch 1/10 - loss: 6.9774 - accuracy: 0.3294
Epoch 2/10 - loss: 1.3038 - accuracy: 0.4831
Epoch 3/10 - loss: 1.0160 - accuracy: 0.6197
Epoch 4/10 - loss: 0.7963 - accuracy: 0.6939
Epoch 5/10 - loss: 0.7006 - accuracy: 0.7183
Epoch 6/10 - loss: 0.6675 - accuracy: 0.7299 - 1s 747us/step
Epoch 7/10 - loss: 0.6681 - accuracy: 0.7330 - 1s 694us/step
Epoch 8/10 - loss: 0.6675 - accuracy: 0.7356 - 1s 702us/step
Epoch 9/10 - loss: 0.6508 - accuracy: 0.7363 - 1s 778us/step
Epoch 10/10 - loss: 0.6532 - accuracy: 0.7350 - 1s 732us/step
313/313 - loss: 0.6816 - accuracy: 0.7230

Test accuracy: 0.722999989986
```

Notice the change in `Test accuracy`. The `Epoch` logs are from the `fit` function. With 128 neurons in the first `Dense` layer, the training `accuracy` went from 0.7769 to 0.9086; after shrinking the layer, it only went from 0.3294 to 0.735. This shows that we first use the training set to optimize `loss` and `accuracy`, and then use the test dataset to test. Let's take a look at `train_labels` first.

```python
print(train_labels)
print(len(train_labels))
```

```shell
[9 0 0 ... 3 0 5]
60000
```

This means that these categories are represented using the numbers `0` to `9`. Conveniently, `class_names` also has ten elements.

```python
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
```

Coming to adjust the model again.
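As an aside, the label-to-name mapping is plain list indexing: the numeric label is a position in `class_names`. Using the labels printed above:

```python
class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']

# train_labels started with [9 0 0 ...], so the first training image is:
print(class_names[9])  # Ankle boot
print(class_names[0])  # T-shirt/top
```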

Model structure:

```python
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(28, activation='relu'),
    tf.keras.layers.Dense(5)
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

model.fit(train_images, train_labels, epochs=10)
```

This raises an error:

```shell
tensorflow.python.framework.errors_impl.InvalidArgumentError: Received a label value of 9 which is outside the valid range of [0, 5).  Label values: 4 3 2 9 4 1 6 0 7 9 1 6 5 2 3 8 6 3 8 0 3 5 6 1 2 6 3 6 8 4 8 4

Function call stack: train_function
```
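The error says it plainly: with `Dense(5)` the model only produces scores for classes 0-4, but the dataset's labels run 0-9. The range check can be sketched with the label values quoted in the error message:

```python
# The first label values quoted in the error message above
labels = [4, 3, 2, 9, 4, 1, 6, 0, 7, 9, 1, 6, 5, 2, 3, 8]

num_outputs = 5  # the size of the last Dense layer

# The sparse categorical loss requires every label l to satisfy 0 <= l < num_outputs
out_of_range = [l for l in labels if not (0 <= l < num_outputs)]
print(out_of_range)  # [9, 6, 7, 9, 6, 5, 8] — these labels have no matching output
```

With ten classes, the last layer must have at least ten outputs, which is why the tutorial uses `Dense(10)`.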

Change the last `Dense` layer to 15 instead; the result is not significantly different. Try changing the epochs too:

```python
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(28, activation='relu'),
    tf.keras.layers.Dense(15)
])

model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

model.fit(train_images, train_labels, epochs=15)  # 10 -> 15

test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)

print('\nTest accuracy:', test_acc)
```

```shell
Epoch 1/15 - loss: 6.5778 - accuracy: 0.3771
Epoch 2/15 - loss: 1.3121 - accuracy: 0.4910
Epoch 3/15 - loss: 1.0900 - accuracy: 0.5389
Epoch 4/15 - loss: 1.0422 - accuracy: 0.5577
Epoch 5/15 - loss: 0.9529 - accuracy: 0.5952
Epoch 6/15 - loss: 0.9888 - accuracy: 0.5950
Epoch 7/15 - loss: 0.8678 - accuracy: 0.6355
Epoch 8/15 - loss: 0.8247 - accuracy: 0.6611
Epoch 9/15 - loss: 0.8011 - accuracy: 0.6626
Epoch 10/15 - loss: 0.8024 - accuracy: 0.6622
Epoch 11/15 - loss: 0.7777 - accuracy: 0.6696
Epoch 12/15 - loss: 0.7764 - accuracy: 0.6728
Epoch 13/15 - loss: 0.7688 - accuracy: 0.6767
Epoch 14/15 - loss: 0.7592 - accuracy: 0.6793
Epoch 15/15
1875/1875 [==============================] - 1s 786us/step - loss: 0.7526 - accuracy: 0.6792
313/313 - 0s - loss: 0.8555 - accuracy: 0.6418

Test accuracy: 0.6417999863624573

```

Notice that changing it to 15 epochs makes little difference. `tf.keras.layers.Dense(88, activation='relu')` is what matters: changing 128 to 88 gives Test accuracy: 0.824999988079071. At 128 it was 0.879800021648407; at 28 it was 0.7229999899864197. So is bigger always better? Not quite: changing it to 256 gives Test accuracy: 0.8409000039100647. This makes us ponder the meaning of `loss` and `accuracy`. Next, wrap the model so it outputs probabilities:

```python
probability_model = tf.keras.Sequential([model, tf.keras.layers.Softmax()])
```
```python
predictions = probability_model.predict(test_images)

def plot_image(i, predictions_array, true_label, img):
    # Get the true label and image for the current index i
    true_label, img = true_label[i], img[i]

    # Turn off the grid and axis ticks
    plt.grid(False)
    plt.xticks([])
    plt.yticks([])

    # Display the image with a binary color map
    plt.imshow(img, cmap=plt.cm.binary)

    # The predicted label is the index of the maximum value in predictions_array
    predicted_label = np.argmax(predictions_array)

    # Blue label if the prediction matches the true label, red otherwise
    if predicted_label == true_label:
        color = 'blue'
    else:
        color = 'red'

    # x-label format: "predicted class X% (true class)"
    plt.xlabel("{} {:2.0f}% ({})".format(class_names[predicted_label],
                                         100 * np.max(predictions_array),
                                         class_names[true_label]),
               color=color)

def plot_value_array(i, predictions_array, true_label):
    # Get the true label for the current index
    true_label = true_label[i]

    # Turn off grid lines
    plt.grid(False)

    # Show x-axis ticks for the 10 classes, hide y-axis ticks
    plt.xticks(range(10))
    plt.yticks([])

    # Draw a bar chart of the prediction probabilities
    thisplot = plt.bar(range(10), predictions_array, color="#777777")
    plt.ylim([0, 1])

    # Color the predicted bar red and the true bar blue
    predicted_label = np.argmax(predictions_array)
    thisplot[predicted_label].set_color('red')
    thisplot[true_label].set_color('blue')

i = 0
plt.figure(figsize=(6, 3))
plt.subplot(1, 2, 1)
plot_image(i, predictions[i], test_labels, test_images)
plt.subplot(1, 2, 2)
plot_value_array(i, predictions[i], test_labels)
plt.show()
```

This indicates that the given image has a 99% chance of being an `Ankle boot`. Notice that `plot_image` draws the left image and `plot_value_array` draws the right one.
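The `Softmax` layer wrapped around the model is just a normalization of the raw scores (logits) into probabilities. In plain NumPy it looks like this (a sketch; the logits here are made up for illustration):

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability; the result sums to 1
    e = np.exp(logits - np.max(logits))
    return e / e.sum()

logits = np.array([1.0, 2.0, 8.0])  # hypothetical raw model scores
probs = softmax(logits)
print(probs, np.argmax(probs))
```

The largest logit dominates after exponentiation, which is why a confident model reports probabilities like the 99% above.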

```python
num_rows = 5
num_cols = 3
num_images = num_rows * num_cols
plt.figure(figsize=(4.5 * num_cols, 3 * num_rows))
for i in range(num_images):
    plt.subplot(num_rows, 2 * num_cols, 2 * i + 1)
    plot_image(i, predictions[i], test_labels, test_images)
    plt.subplot(num_rows, 2 * num_cols, 2 * i + 2)
    plot_value_array(i, predictions[i], test_labels)
plt.tight_layout()
plt.show()
```

Notice that this just displays more test results. So we roughly understand the process, but we don't know how the calculations are done behind the scenes. We do know how to use the tools, though. They are based on calculus. How do we build an intuition for that?

For example, suppose there is a number between 1 and 100, and I ask you to guess it. Each time you guess, I tell you whether your guess is too small or too large. You guess 50; I say too small. You guess 80; too large. You guess 65; too large. You guess 55; too small. You guess 58; and I say, hmm, you guessed correctly.
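That guessing strategy is just bisection. As a toy sketch (the function name is ours, not from the post):

```python
def guess_number(secret, low=1, high=100):
    """Guess a number by halving the interval, using too-small/too-large feedback."""
    guesses = 0
    while True:
        guess = (low + high) // 2
        guesses += 1
        if guess == secret:
            return guess, guesses
        elif guess < secret:   # "too small" -> search the upper half
            low = guess + 1
        else:                  # "too large" -> search the lower half
            high = guess - 1

print(guess_number(58))  # → (58, 7)
```

Halving the interval each time means any number in 1..100 is found in at most 7 guesses.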

Machine learning works in a similar way. The only difference is that there are usually many "numbers between 1 and 100" to guess at once, each guess involves a lot of computation, and each larger-or-smaller comparison also requires a lot of computation.

I. Introduction

PyTorch is an open-source machine learning library for deep learning based on Torch, which is used for applications such as natural language processing and computer vision. It is developed by Facebook's artificial intelligence research lab, FAIR. PyTorch is known for its simplicity and ease of use, as well as its dynamic computational graph and strong GPU acceleration capabilities.

II. Features

1. Dynamic computational graph: PyTorch uses a dynamic computational graph, which means that the graph is constructed and modified at runtime, allowing for greater flexibility and easier debugging.
2. Strong GPU acceleration: PyTorch has excellent support for GPU acceleration, making it well-suited for deep learning applications that require large computational resources.
3. Easy-to-use interface: PyTorch has a simple and intuitive interface, making it easy for researchers and developers to build and train deep learning models.
4. Seamless transition between CPUs and GPUs: PyTorch allows for seamless transition between CPUs and GPUs, making it easy to move code between them for testing and debugging.
5. Built-in tensor operations: PyTorch includes a large number of built-in tensor operations, making it easy to perform common deep learning tasks.
6. Active community: PyTorch has a large and active community of users and developers, making it easy to find resources and get help when needed.

III. Applications

PyTorch is used for a variety of applications in machine learning, including:

1. Natural language processing: PyTorch is widely used for natural language processing tasks such as sentiment analysis, text classification, and machine translation.
2. Computer vision: PyTorch is used for computer vision tasks such as image classification, object detection, and semantic segmentation.
3. Speech recognition: PyTorch is used for speech recognition tasks such as speech-to-text conversion and speaker identification.
4. Reinforcement learning: PyTorch is used for reinforcement learning tasks such as training agents to play games and optimize complex systems.

IV. Conclusion

PyTorch is a powerful and flexible machine learning library for deep learning applications. Its dynamic computational graph, strong GPU acceleration, and easy-to-use interface make it a popular choice for researchers and developers in the field. PyTorch is used for a wide range of applications, including natural language processing, computer vision, speech recognition, and reinforcement learning. Install it. This supports Python version 3.9.

$ pip install torch torchvision
Collecting torch
  Downloading torch-1.8.0-cp39-none-macosx_10_9_x86_64.whl (120.6 MB)
Collecting torchvision
  Downloading torchvision-0.9.0-cp39-cp39-macosx_10_9_x86_64.whl (13.1 MB)
Requirement already satisfied: numpy in /usr/local/lib/python3.9/site-packages (from torch) (1.20.1)
Collecting typing-extensions
  Downloading typing_extensions-3.7.4.3-py3-none-any.whl (22 kB)
Requirement already satisfied: pillow>=4.1.1 in /usr/local/lib/python3.9/site-packages (from torchvision) (8.0.1)
Installing collected packages: typing-extensions, torch, torchvision
Successfully installed typing-extensions-3.7.4.3 torch-1.8.0 torchvision-0.9.0

Check it out.

Python code:

```python
import torch
x = torch.rand(5, 3)
print(x)
```

Error:

```
Traceback (most recent call last):
  File "torch.py", line 1, in <module>
    import torch
  File "torch.py", line 2, in <module>
    x = torch.rand(5, 3)
AttributeError: partially initialized module 'torch' has no attribute 'rand' (most likely due to a circular import)
```

Google this error message. It turns out our own file is also named `torch.py`, so the import finds it instead of the library: a naming conflict. Renaming the file fixes the issue.

The output is a 5×3 tensor of random values:

```
tensor([[0.5520, 0.9446, 0.5543],
        [0.6192, 0.0908, 0.8726],
        [0.0223, 0.7685, 0.9814],
        [0.4019, 0.5406, 0.3861],
        [0.5485, 0.6040, 0.2387]])
```

Find an example.

```python
# -*- coding: utf-8 -*-
import torch
import math

dtype = torch.float
device = torch.device("cpu")
# device = torch.device("cuda:0")  # Uncomment this to run on GPU

# Create random input and output data
x = torch.linspace(-math.pi, math.pi, 2000, device=device, dtype=dtype)
y = torch.sin(x)

# Randomly initialize weights
a = torch.randn((), device=device, dtype=dtype)
b = torch.randn((), device=device, dtype=dtype)
c = torch.randn((), device=device, dtype=dtype)
d = torch.randn((), device=device, dtype=dtype)

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y = a + b x + c x^2 + d x^3
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and print loss every 100 iterations
    loss = (y_pred - y).pow(2).sum().item()
    if t % 100 == 99:
        print(t, loss)

    # Backprop to compute gradients of a, b, c, d with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_a = grad_y_pred.sum()
    grad_b = (grad_y_pred * x).sum()
    grad_c = (grad_y_pred * x ** 2).sum()
    grad_d = (grad_y_pred * x ** 3).sum()

    # Update weights using gradient descent
    a -= learning_rate * grad_a
    b -= learning_rate * grad_b
    c -= learning_rate * grad_c
    d -= learning_rate * grad_d

print(f'Result: y = {a.item()} + {b.item()} x + {c.item()} x^2 + {d.item()} x^3')
```

Run it once.

The loss printed every 100 iterations, followed by the fitted result:

```
99 1273.537353515625
199 849.24853515625
299 567.4786987304688
399 380.30291748046875
499 255.92752075195312
599 173.259814453125
699 118.2861328125
799 81.72274780273438
899 57.39331817626953
999 41.198158264160156
1099 30.41307830810547
1199 23.227672576904297
1299 18.438262939453125
1399 15.244369506835938
1499 13.113286972045898
1599 11.690631866455078
1699 10.740333557128906
1799 10.105220794677734
1899 9.6804780960083
1999 9.39621353149414
Result: y = -0.011828352697193623 + 0.8360244631767273 x + 0.002040589228272438 x^2 - 0.09038365632295609 x^3
```

The loss shrinks steadily as the coefficients converge. Now check the same fit using only the NumPy library.

```python
import numpy as np
import math

# Create random input and output data
x = np.linspace(-math.pi, math.pi, 2000)
y = np.sin(x)

# Randomly initialize weights
a = np.random.randn()
b = np.random.randn()
c = np.random.randn()
d = np.random.randn()

learning_rate = 1e-6
for t in range(2000):
    # Forward pass: compute predicted y = a + b x + c x^2 + d x^3
    y_pred = a + b * x + c * x ** 2 + d * x ** 3

    # Compute and print loss every 100 iterations
    loss = np.square(y_pred - y).sum()
    if t % 100 == 99:
        print(t, loss)

    # Backprop to compute gradients of a, b, c, d with respect to loss
    grad_y_pred = 2.0 * (y_pred - y)
    grad_a = grad_y_pred.sum()
    grad_b = grad_y_pred.dot(x)
    grad_c = grad_y_pred.dot(x ** 2)
    grad_d = grad_y_pred.dot(x ** 3)

    # Update weights
    a -= learning_rate * grad_a
    b -= learning_rate * grad_b
    c -= learning_rate * grad_c
    d -= learning_rate * grad_d

print(f'Result: y = {a} + {b} x + {c} x^2 + {d} x^3')
```

Two examples have been given. Each first generates a set of x and y values, assumes a cubic polynomial y = a + b x + c x^2 + d x^3, and then iteratively computes the coefficients. What is this algorithm? Notice that it loops 2000 times, fitting a little more accurately each time. We will not delve into the details for now.
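As a sanity check (not from the original post), NumPy can solve this cubic least-squares fit directly with `np.polyfit`, which shows what the 2000 gradient-descent iterations are converging toward:

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 2000)
y = np.sin(x)

# polyfit returns coefficients highest power first: [d, c, b, a]
d, c, b, a = np.polyfit(x, y, 3)
print(f'y = {a} + {b} x + {c} x^2 + {d} x^3')
```

By symmetry the even coefficients a and c come out essentially zero, and b ≈ 0.857, d ≈ -0.093: close to, but a bit beyond, what 2000 iterations reached above.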

Finally,

Currently, we don't know how machine learning computes things behind the scenes, but that is not crucial at this moment. With the knowledge above, we can already do many things, and we can use machine learning to process text, audio, and more. Once we have explored a few dozen examples, studying the principles won't be far off. Readers can explore further in the same way as above.


Back 2021.03.12 Donate