Arsenal's 0-1 defeat at Nottingham Forest crowned Manchester City champions ahead of schedule, their third title in a row.

At 0:30 Beijing time on May 21st, in the 37th round of the Premier League, Arsenal travelled to Nottingham Forest. Awoniyi scored the only goal of the game, and Arsenal lost 0-1, their second consecutive league defeat.

With that result, Arsenal sat four points behind leaders Manchester City with only one round left to play. Manchester City therefore clinched the Premier League title ahead of schedule, while the Gunners were locked into second place.

In the 20th minute, Ødegaard misplaced a back pass and Nottingham Forest's midfield broke forward on the counter. Awoniyi ran onto a teammate's through ball and burst into the right side of the penalty area. Goalkeeper Ramsdale rushed out, and Gabriel, tracking back, reached the ball first but kicked it straight into the striker. Awoniyi seized on the rebound and finished low. Nottingham Forest led 1-0!

In the second half, Arsenal enjoyed more than 80% possession but still could not break through; if anything, the home side's counter-attacks looked the more threatening. Arsenal head coach Arteta made changes in search of a comeback, but to no avail, and his side left with an away defeat.

After the game, Arsenal manager Arteta congratulated Manchester City on winning their fifth league title in six seasons, and admitted he was hurting. "We need to heal, and right now it is too painful and sad. I have to find a way to lift the players."


Infrastructure for training AI to solve common problems

Training artificial intelligence models that can solve common problems requires supporting infrastructure. This infrastructure is usually made up of hardware, software and tools that improve the efficiency and accuracy of model training. This article introduces the infrastructure needed to train AI to solve common problems.

I. Hardware infrastructure

Training artificial intelligence models usually requires high-performance computing hardware. The following are several common kinds of hardware infrastructure:

  1. CPU: The central processing unit (CPU) is general-purpose computing hardware that can run all kinds of software, including artificial intelligence models. Although a CPU offers relatively low throughput for this workload, it is still useful for training small models or for debugging.

  2. GPU: The graphics processing unit (GPU) is specialised hardware originally designed for processing images and video. Because of its highly parallel architecture, a GPU delivers far higher computing performance than a CPU when training artificial intelligence models, so it is widely used.

  3. TPU: The tensor processing unit (TPU), developed by Google, is hardware designed specifically for artificial intelligence computation. On suitable workloads a TPU can outperform a GPU, and it is used for large-scale model training and inference (a minimal device-selection sketch follows this list).
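As a concrete illustration, the snippet below is a minimal sketch, assuming PyTorch is installed, of how code selects whichever hardware is available at runtime and falls back to the CPU otherwise. TPUs are reached through separate add-on libraries (for example torch_xla, or TensorFlow's TPU support) and are not shown here.

```python
# A minimal device-selection sketch, assuming PyTorch is installed.
import torch

def pick_device() -> torch.device:
    """Prefer a GPU if one is visible, otherwise fall back to the CPU."""
    if torch.cuda.is_available():      # NVIDIA GPU available via CUDA
        return torch.device("cuda")
    return torch.device("cpu")         # general-purpose fallback

device = pick_device()
print(f"training on: {device}")

# Tensors and models are moved to the chosen hardware explicitly.
x = torch.randn(8, 16).to(device)
print(x.device)
```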

II. Software infrastructure

In addition to hardware, a number of software tools are needed to support the training of artificial intelligence models. The following are some common pieces of software infrastructure:

  1. Operating system: Artificial intelligence models usually need to run on an operating system, such as Linux, Windows or macOS.

  2. Development environment: The development environment usually includes a programming language, an editor and an integrated development environment (IDE) for writing and testing artificial intelligence models. Common setups are built around Python, with tools such as Jupyter Notebook, inside which frameworks like TensorFlow and PyTorch are used.

  3. Frameworks and libraries: Frameworks and libraries provide common model-building blocks, training algorithms and data-processing utilities, making model development and training more convenient. Common frameworks and libraries include TensorFlow, PyTorch, Keras and Scikit-Learn (a minimal training sketch follows this list).
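To show what such a framework takes care of, here is a minimal, self-contained training sketch using PyTorch, one of the frameworks named above; the tiny synthetic dataset and the linear model are illustrative assumptions, not part of the original article.

```python
# A minimal training loop with PyTorch: model, loss and optimizer come from the framework.
import torch
from torch import nn

torch.manual_seed(0)

# Synthetic regression data: y = 3x + noise (illustration only).
X = torch.randn(256, 1)
y = 3 * X + 0.1 * torch.randn(256, 1)

model = nn.Linear(1, 1)                              # the model being trained
loss_fn = nn.MSELoss()                               # library-provided loss function
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()                                  # automatic differentiation
    optimizer.step()

print("learned weight:", model.weight.item())        # should end up close to 3
```

The same structure carries over to large models: the framework supplies automatic differentiation, optimizers and hardware placement, so the user mostly writes the model definition and the data pipeline.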

III. Tool infrastructure

In addition to the hardware and software infrastructure, a number of supporting tools are needed to train artificial intelligence models. The following are several common kinds of tool infrastructure:

  1. Dataset tools: Dataset tools are used to process and prepare training data, for example data cleaning, preprocessing and format conversion. Common dataset tools include Pandas, NumPy and SciPy.

  2. Visualization tools: Visualization tools are used to visualize the training process and its results, helping users better understand the performance and behavior of a model. Common visualization tools include Matplotlib, Seaborn and Plotly.

  3. Automatic parameter-tuning tools: Automatic parameter-tuning tools search over a model's hyperparameters to improve its performance and accuracy. Common tools include Optuna, Hyperopt and GridSearchCV (a short sketch combining a dataset tool with a tuning tool follows this list).
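The sketch below ties two of these tool categories together, assuming pandas and scikit-learn are installed: a small built-in dataset is loaded into a Pandas DataFrame, and GridSearchCV tunes a classifier's parameters. The iris dataset and the RandomForestClassifier are illustrative choices, not ones named in the article.

```python
# Minimal sketch: dataset tooling (Pandas) plus automatic parameter tuning (GridSearchCV).
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Load a small built-in dataset into a DataFrame for inspection and cleaning.
data = load_iris(as_frame=True)
df: pd.DataFrame = data.frame
X, y = df.drop(columns=["target"]), df["target"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# GridSearchCV tries every parameter combination with cross-validation.
param_grid = {"n_estimators": [50, 100], "max_depth": [3, 5, None]}
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=3)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("test accuracy:", search.best_estimator_.score(X_test, y_test))
```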

In short, training artificial intelligence models to solve common problems requires a combination of infrastructure: hardware, software and tools. This infrastructure exists to improve the efficiency and accuracy of model training, so that the resulting models can better solve practical problems. In practice, users should choose the appropriate infrastructure according to their specific requirements and the characteristics of their data, and design and build their setup accordingly.