TensorFlow

Information

TensorFlow is an open-source software library for machine learning and artificial intelligence developed by Google. It offers a flexible and scalable ecosystem of tools, libraries, and community resources that lets researchers and developers build and deploy machine-learning-powered applications. TensorFlow's architecture is built around computational graphs consisting of nodes (operations) and edges (the tensors flowing between them). In TensorFlow 1.x these graphs were executed inside sessions, which governed data flow and resource allocation; TensorFlow 2.x executes operations eagerly by default and builds graphs via tf.function.
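For illustration, here is a minimal sketch (the function and values are arbitrary examples, not part of the server setup) of how TensorFlow 2.x traces an ordinary Python function into a computational graph with tf.function:

import tensorflow as tf

# tf.function traces this Python function into a computational graph whose
# nodes are operations and whose edges carry tensors.
@tf.function
def scaled_sum(x, y):
    return tf.reduce_sum(x * 2.0 + y)

a = tf.constant([1.0, 2.0, 3.0])
b = tf.constant([4.0, 5.0, 6.0])
print(scaled_sum(a, b))  # tf.Tensor(27.0, shape=(), dtype=float32)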

Key Features of TensorFlow

  • Pre-configured environment: the latest stable version of TensorFlow comes pre-installed, along with optimized NVIDIA drivers and CUDA settings. Supports creating and managing multiple Python virtual environments.
  • Variety of machine learning models: enables creation and training of a wide range of models, including neural networks, linear regression, logistic regression, and many others. A versatile tool for solving diverse machine learning tasks.
  • Scalability: allows training models on large datasets. This library can efficiently utilize computational resources by distributing the load across multiple processors or graphics processing units (GPUs).
  • Flexibility: provides a flexible programming interface that enables developers to create machine learning models tailored to their specific needs. Supports various levels of abstraction, from low-level control over computations to high-level APIs for rapid prototyping.
  • Visualization tools: includes tools for visualizing model structures, computation graphs, and data. This functionality helps better understand and debug models, as well as facilitates result interpretation.
  • Distributed computing support: computations can be spread across multiple devices, such as CPUs and GPUs, as well as across machines in a cluster, accelerating both training and inference through parallelism (see the sketch after this list).
  • Integration with other libraries: can be easily integrated with other popular machine learning libraries, such as Keras, scikit-learn, and many others.
  • Comprehensive community and educational resources: an active user community and developer ecosystem, along with numerous educational resources, including documentation, courses, tutorials, and example code.
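As a minimal sketch of the scalability and distributed-computing points above (assuming a machine with one or more GPUs; the model and shapes are arbitrary examples), a small Keras model can be replicated across all visible GPUs with tf.distribute.MirroredStrategy:

import tensorflow as tf

# MirroredStrategy replicates the model on every visible GPU on this machine
# and splits each training batch between the replicas; with no GPU it falls
# back to the CPU.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(20,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")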

A private server with TensorFlow is designed for researchers, developers, and companies that require a secure, high-performance computing environment for developing and training machine learning models with TensorFlow. It provides complete control over resources, keeps data confidential, and accelerates model training through the use of modern NVIDIA GPUs.

Deployment Features

  • Installable on Ubuntu 22.04;
  • Installation time: 15-30 minutes, including OS;
  • Installs Python, TensorFlow, CUDA, and NVIDIA drivers;
  • User's home directory is /home/user;
  • System requirements: professional graphics card (NVIDIA RTX A4000/A5000 or NVIDIA H100), at least 16 GB of RAM.

Getting Started with TensorFlow After Deployment

After payment, a notification that the server is ready for use will be sent to the email address you provided during registration. The message will include the VPS IP address and login credentials. Server management is handled through our control panel, Invapi.

The authentication data can be found in the Info >> Tags section of the server control panel or in the email you received:

  • Login: root for the administrator, user for working with TensorFlow;
  • Password: the administrator (root) password is sent by email when the server is delivered; the password for user is stored in the file /root/user_credentials.

Connecting and Initial Setup

After gaining access to the server, establish a connection to it via SSH with superuser privileges (root):

ssh root@<server_ip>

Then execute the command:

nano /root/user_credentials

This command opens a text file containing the credentials of the user account. Copy the password for user, close the root session, and reconnect to the server via SSH as user with the copied password.
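For example (substitute your server's IP address for <server_ip>):

ssh user@<server_ip>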

Once you are logged in as user, activate the venv virtual environment by running:

. tensorflow.sh

Now you can begin working in the Python interpreter by running it with the command:

python

The interpreter is now ready for input and code execution.

To test the library's functionality and GPU support, you can enter the following program into the Python console:

import tensorflow as tf
print(tf.reduce_sum(tf.random.normal([1000, 1000])))
The first line imports the TensorFlow library; the second creates a 1000x1000 tensor of random numbers drawn from a normal distribution, sums its elements, and prints the result. The output is a scalar tensor of the form tf.Tensor(…, shape=(), dtype=float32); the exact value varies from run to run because the input is random.
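To confirm that TensorFlow actually sees the GPU, you can also list the devices it has detected (an empty list means the NVIDIA driver or CUDA libraries are not being picked up):

import tensorflow as tf

# Prints the GPUs visible to TensorFlow, e.g.
# [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
print(tf.config.list_physical_devices("GPU"))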

You can also use the training script tensorflow-2-simple-examples: create a file on the server, copy the script text into it, and run it with the Python interpreter.
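The exact contents of tensorflow-2-simple-examples are not reproduced here. As a stand-in, the following self-contained sketch trains a small Keras classifier on the MNIST dataset; it can be saved to a file (for example train_mnist.py, a name chosen here for illustration) and run with python train_mnist.py:

import tensorflow as tf

# Load and normalize the MNIST handwritten-digit dataset.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully connected classifier.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10),
])

model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)

# Train briefly and evaluate on the held-out test set.
model.fit(x_train, y_train, epochs=2)
model.evaluate(x_test, y_test, verbose=2)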

Note

For detailed information on the main settings of TensorFlow, refer to the developer documentation.

Ordering a Server with TensorFlow using the API

To install this software using the API, follow these instructions.