India, through its Embassy in the Philippines, on Tuesday launched the first India-Philippine Tech Summit (IPTS) to help boost its technology and digital partnership with Manila. In his speech at the IPTS in Taguig City, Indian Ambassador to Manila Shambhu Kumaran highlighted the need for developing countries to build partnerships amongst themselves, particularly in security, people-to-people ties,…
In a first for India, a school in Kerala’s Thiruvananthapuram has introduced an AI teacher, Iris. The robot, based on generative AI and created by Maker Labs, is ‘poised to redefine the learning landscape’ and will push the boundaries of ‘what’s possible’ in the field of education.
India's first AI teacher, Iris, interacts with…
Google has announced a significant update for Bard, its formidable competitor to ChatGPT. The latest update brings exciting features, including multilingual support in 40 languages, with a particular emphasis on catering to the diverse linguistic needs of users in India.
Bard can now fluently communicate in Hindi, Tamil, Telugu, Bengali, Kannada, Malayalam, Marathi, Gujarati, and Urdu.…
OpenAI witnessed a tumultuous time last month after Sam Altman was fired and went into an exile of sorts for nearly five days. Now that Altman has been reinstated, the high-stakes drama has come to an end. However, the company hailed as the pioneer of AI technologies was in a precarious spot during those five…
People Involved
LABRAK Yanis (1)
Affiliations
LIA, NLP team, Avignon University, Avignon, France.
Demo: How to use in HuggingFace Transformers Pipeline
Requires transformers: pip install transformers
from transformers import AutoTokenizer, AutoModelForSequenceClassification, TextClassificationPipeline

# Load the intent-classification model and wrap it in a text-classification pipeline
model_name = 'qanastek/XLMRoberta-Alexa-Intents-Classification'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
classifier = TextClassificationPipeline(model=model, tokenizer=tokenizer)

# "Wake me up at nine in the morning on Friday"
res = classifier("réveille-moi à neuf heures du matin le vendredi")
print(res)
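The pipeline returns a list with one result per input, each a dictionary containing the predicted intent label and its confidence score.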
LLaVA Model Card - PATCHED!
This is a patched version of the original model, with patches from aliencaocao applied.
Model details
Model type:
LLaVA is an open-source chatbot trained by fine-tuning an LLM on multimodal instruction-following data.
It is an auto-regressive language model, based on the transformer architecture.
Base LLM: mistralai/Mistral-7B-Instruct-v0.2
Model date:
LLaVA-v1.6-Mistral-7B was…
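A minimal inference sketch, assuming the patched weights follow the same LLaVA-NeXT layout as the upstream llava-hf/llava-v1.6-mistral-7b-hf checkpoint; the repository id and image path below are placeholders to substitute with the patched repo and your own file:

from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

# Placeholder id: upstream checkpoint, not the patched repository
model_id = 'llava-hf/llava-v1.6-mistral-7b-hf'
processor = LlavaNextProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(model_id)

image = Image.open('example.jpg')  # placeholder image path
# Mistral-Instruct chat format with an <image> token, as expected by LLaVA-v1.6-Mistral-7B
prompt = '[INST] <image>\nWhat is shown in this image? [/INST]'
inputs = processor(images=image, text=prompt, return_tensors='pt')
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))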
Model card for Pix2Struct - Finetuned on OCR-VQA (Visual Question Answering over book covers) - large version
Table of Contents
TL;DR
Using the model
Contribution
Citation
TL;DR
Pix2Struct is an image encoder - text decoder model that is trained on image-text pairs for various tasks, including image captioning and…
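A minimal usage sketch, assuming the google/pix2struct-ocrvqa-large checkpoint on the Hugging Face Hub; the image path and question are placeholders:

from PIL import Image
from transformers import Pix2StructForConditionalGeneration, Pix2StructProcessor

model_name = 'google/pix2struct-ocrvqa-large'
processor = Pix2StructProcessor.from_pretrained(model_name)
model = Pix2StructForConditionalGeneration.from_pretrained(model_name)

image = Image.open('book_cover.jpg')  # placeholder path to a book-cover image
question = 'Who is the author of this book?'
# For VQA checkpoints the question is passed as text alongside the image
inputs = processor(images=image, text=question, return_tensors='pt')
predictions = model.generate(**inputs, max_new_tokens=32)
print(processor.decode(predictions[0], skip_special_tokens=True))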
caicai0205/8888800 • Depth Estimation • Updated Jun 3, 2023
