Author: Hao Tian; Source: Chain View
Recently, news that NEAR founder @ilblackdragon would appear at the NVIDIA AI Conference gave the NEAR public chain a boost, and its market price trend has been gratifying as well. Many friends are wondering: isn't the NEAR chain all-in on chain abstraction? How did it inexplicably become a leading AI public chain? Next, I will share my observations and explain some basics of AI model training:
1) NEAR founder Illia Polosukhin has a long-standing AI background and is a co-author of the Transformer architecture. Transformer is the foundational architecture for training today's LLMs (large language models) such as ChatGPT, which is enough to show that NEAR's founder had real experience creating and leading large AI model systems before establishing NEAR.
2) NEAR launched NEAR Tasks at NEARCON 2023, with the goal of training and improving artificial intelligence models. Simply put, vendors that need model training can issue task requests on the platform and upload raw data; users (Taskers) can take on tasks and perform manual work on the data, such as text annotation and image recognition. After a task is completed, the platform rewards the user with NEAR tokens, and the manually labeled data is used to train the corresponding AI model.
For example: suppose an AI model needs to improve its ability to identify objects in pictures. A vendor can upload a large number of raw pictures containing different objects to the Tasks platform; users then manually mark the positions of the objects in each picture. This generates a large amount of "picture-object position" data, which the AI can learn from autonomously to improve its image recognition capability.
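To make that workflow concrete, here is a minimal sketch of how such a task-and-annotation flow could be modeled. The structures, field names, and reward figure are my own illustration of the idea described above, not NEAR Tasks' actual API:

```python
from dataclasses import dataclass, field

@dataclass
class BoundingBox:
    # A Tasker's manual label: which object sits where in the image.
    label: str          # e.g. "cat"
    x: int; y: int      # top-left corner, in pixels
    w: int; h: int      # width and height

@dataclass
class AnnotationTask:
    vendor: str          # who requested the training data
    image_url: str       # raw picture uploaded by the vendor
    reward_near: float   # NEAR tokens paid on completion (hypothetical amount)
    boxes: list = field(default_factory=list)

    def submit(self, tasker: str, boxes: list[BoundingBox]) -> float:
        """Record the Tasker's labels and return the reward due."""
        self.boxes = boxes
        return self.reward_near

# A vendor posts one image; a Tasker marks the object positions.
task = AnnotationTask(vendor="vision-lab",
                      image_url="https://example.com/img1.jpg",
                      reward_near=0.5)
reward = task.submit("tasker.near", [BoundingBox("cat", 40, 60, 128, 96)])
print(f"{reward} NEAR earned; {len(task.boxes)} labels collected")
```

The resulting "picture-object position" pairs are exactly the supervision signal an image recognition model trains on.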
At first glance, NEAR Tasks simply crowdsources manual work to provide basic services for AI models. Is that really so important? Let me add some background on AI models here.
Normally, a complete AI model training process includes data collection, data preprocessing and annotation, model design and training, model tuning and fine-tuning, model validation and testing, model deployment, and model monitoring and updating. Data annotation and preprocessing make up the manual part, while model training and optimization make up the machine part.
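As a rough schematic of where the manual steps sit in that pipeline, here is a sketch in which every stage is a named placeholder rather than a real implementation (stage names follow the list above):

```python
# Schematic AI training pipeline; each function is a placeholder.
# Stages 1-2 are the manual part; stages 3-7 are the machine part.
def collect_data():       return ["raw sample"]                # 1. data collection
def annotate(raw):        return [(x, "label") for x in raw]   # 2. preprocessing + manual annotation
def train(labeled):       return {"weights": len(labeled)}     # 3. model design and training
def fine_tune(model):     return model                         # 4. tuning and fine-tuning
def validate(model):      return True                          # 5. validation and testing
def deploy(model):        print("deployed", model)             # 6. deployment
def monitor(model):       print("monitoring for drift")        # 7. monitoring and updating

model = fine_tune(train(annotate(collect_data())))
if validate(model):
    deploy(model)
    monitor(model)
```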
Obviously, most people assume the machine part matters far more than the manual part; after all, it looks more high-tech. In reality, however, manual annotation is critically important to the overall training process.
Manual annotation can add labels to objects (people, places, things) in images to help computers improve visual model learning; it can transcribe speech content into text and mark specific syllables and phrases to help computers train speech recognition models; and it can attach emotion tags such as happiness, sadness, or anger to text, enabling artificial intelligence to strengthen its sentiment analysis skills.
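For intuition, here are illustrative shapes of annotated samples for those three kinds of tasks. The field layouts are my own examples, not any platform's real schema:

```python
# Illustrative manually annotated samples for three common tasks.

vision_sample = {
    "image": "street.jpg",
    "objects": [{"label": "person", "box": [12, 30, 64, 120]}],  # what + where
}

speech_sample = {
    "audio": "clip.wav",
    "transcript": "open the door",           # speech converted to text
    "phonemes": ["OW1", "P", "AH0", "N"],    # marked syllables/units
}

sentiment_sample = {
    "text": "The update finally fixed my wallet sync!",
    "emotion": "happy",                      # happiness / sadness / anger, etc.
}

for sample in (vision_sample, speech_sample, sentiment_sample):
    print(sorted(sample.keys()))
```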
It is not difficult to see that manual annotation is the foundation of machine deep learning. Without high-quality annotated data, a model cannot learn efficiently; and if the volume of annotated data is too small, model performance will be limited as well.
At present, many vertical AI directions do secondary fine-tuning or specialized training on top of the ChatGPT large model. These are essentially built on OpenAI's data, with new data sources added on top, especially manually labeled data, to carry out further model training.
For example, if a medical company wants to train a medical imaging AI model and provide a set of online AI consultation services for hospitals, it only needs to upload a large amount of raw medical imaging data to the Tasks platform and then let users annotate it as tasks, thereby generating manually annotated data. That data is then used to fine-tune and optimize the ChatGPT large model, turning a general-purpose AI tool into an expert in a vertical field.
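As a generic, minimal sketch of that fine-tuning idea, here is what training on vertical-domain annotated records could look like using the open-source Hugging Face stack (ChatGPT itself is fine-tuned through OpenAI's hosted service, so this stands in for the concept; the model choice and toy records are my own assumptions):

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification,
                          AutoTokenizer, Trainer, TrainingArguments)

# Hypothetical manually annotated records from a Tasks-style platform.
records = [
    {"text": "Chest X-ray shows clear lung fields.", "label": 0},
    {"text": "Opacity noted in the left lower lobe.", "label": 1},
]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

# Tokenize the annotated text so the model can consume it.
ds = Dataset.from_list(records).map(
    lambda r: tokenizer(r["text"], truncation=True,
                        padding="max_length", max_length=64))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=ds,
)
trainer.train()  # the manual labels drive the specialization
```

The point is the division of labor: the crowd supplies the labels, and the machine side is a largely standard fine-tuning loop.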
However, the Tasks platform alone is obviously not enough for NEAR to become a leading AI public chain. NEAR also provides AI Agent services in its ecosystem, which can automatically execute users' on-chain behaviors and operations: with just an authorization, users can let the agent freely buy and sell assets in the market for them. This is somewhat similar to Intent-centric design, using AI-automated execution to improve the user's on-chain interaction experience. In addition, NEAR's powerful DA (data availability) capabilities let it play a role in the traceability of AI data sources, tracking the validity and authenticity of AI model training data.
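NEAR's actual AI Agent interfaces are not specified here, so the following is only a hypothetical sketch of the intent-centric idea: the user signs a bounded authorization once, and the agent may then act freely, but only within those bounds:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Authorization:
    # Hypothetical user-signed permission defining what the agent MAY do.
    owner: str
    allowed_assets: tuple   # assets the agent may trade
    max_spend: float        # hard spending cap, in NEAR

@dataclass
class Intent:
    # The user's goal; how it gets executed is left to the agent.
    action: str             # e.g. "buy"
    asset: str
    amount: float

def execute(intent: Intent, auth: Authorization) -> str:
    """Hypothetical agent step: act only inside the signed bounds."""
    if intent.asset not in auth.allowed_assets:
        return "rejected: asset not authorized"
    if intent.amount > auth.max_spend:
        return "rejected: exceeds spending cap"
    return f"{auth.owner}: {intent.action} {intent.amount} {intent.asset} executed"

auth = Authorization(owner="alice.near", allowed_assets=("USDC",), max_spend=100.0)
print(execute(Intent("buy", "USDC", 50.0), auth))   # executed
print(execute(Intent("buy", "BTC", 50.0), auth))    # rejected
```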
In short, backed by a high-performance chain, NEAR's technical extension and narrative in the AI direction look far more powerful than pure chain abstraction.
When I analyzed NEAR's chain abstraction half a month ago, I already saw the advantages of NEAR's chain performance plus the team's strong web2 resource-integration capabilities. What I never expected was that before chain abstraction had even become popular enough to bear fruit, this wave of AI empowerment would amplify the imagination once again.
Note: For the long term, what matters is still NEAR's layout and product progress on "chain abstraction"; AI will be a nice bonus and a bull-market catalyst!