
layoutlmv2-large-uncased-finetuned-infovqa

This model is a fine-tuned version of microsoft/layoutlmv2-large-uncased on an unknown dataset. It achieves the following results on the evaluation set: Loss: 2.2207. Model description: more information needed. Intended uses & limitations: more information needed. Training and evaluation data: more information needed. Training procedure: Training…
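
A hedged sketch of running this checkpoint for extractive document QA, assuming it is published on the Hub under a repo id matching the card title (placeholder below) and that the standard transformers LayoutLMv2 classes apply; LayoutLMv2Processor additionally needs detectron2 and pytesseract installed for its built-in OCR.

    import torch
    from PIL import Image
    from transformers import LayoutLMv2Processor, LayoutLMv2ForQuestionAnswering

    # Hypothetical repo id inferred from the card title; confirm on the Hub.
    REPO_ID = "layoutlmv2-large-uncased-finetuned-infovqa"

    # Processor taken from the backbone checkpoint, assuming the fine-tuned
    # repo does not ship its own preprocessing config.
    processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-large-uncased")
    model = LayoutLMv2ForQuestionAnswering.from_pretrained(REPO_ID)

    image = Image.open("infographic.png").convert("RGB")  # any document image
    question = "What is the total revenue?"               # placeholder question

    encoding = processor(image, question, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**encoding)

    # Decode the highest-scoring answer span from the start/end logits.
    start = outputs.start_logits.argmax(-1).item()
    end = outputs.end_logits.argmax(-1).item()
    print(processor.tokenizer.decode(encoding.input_ids[0, start:end + 1]))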

Ziya-Visual-14B-Chat

Main Page: Fengshenbang. GitHub: Fengshenbang-LM. Ziya (Jiang Ziya) model series: Ziya-LLaMA-13B-v1.1, Ziya-LLaMA-13B-v1, Ziya-LLaMA-7B-Reward, Ziya-LLaMA-13B-Pretrain-v1. Software dependencies: pip install torch==1.12.1 tokenizers==0.13.3 git+https://github.com/huggingface/transformers. Model taxonomy: Demand = Multi-Modal, Task = General, Series = Ziya-Visual, Model = InstructBLIP LLaMA, Parameter = 14B, Extra = English & Chinese. Usage: import gradio as gr; from PIL import Image; import torch; import random; from …
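
Since the Usage block is truncated above, here is a hedged loading sketch only: the repo id, the float16 setting, and the trust_remote_code path (assuming the repo ships custom InstructBLIP/LLaMA modeling code) are all assumptions; the card's full Usage section is the authoritative reference.

    import torch
    from PIL import Image
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Hypothetical repo id; the actual org/name on the Hub may differ.
    REPO_ID = "IDEA-CCNL/Ziya-Visual-14B-Chat"

    tokenizer = AutoTokenizer.from_pretrained(REPO_ID, use_fast=False)
    model = AutoModelForCausalLM.from_pretrained(
        REPO_ID,
        torch_dtype=torch.float16,
        trust_remote_code=True,  # assumes custom modeling code in the repo
    ).eval()

    image = Image.open("example.jpg").convert("RGB")
    prompt = "Describe this image."  # the model is bilingual (English & Chinese)

    # Generation goes through the repo's own chat/inference entry point,
    # which is defined in its custom code and shown in the full Usage block.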

madnet_keras

MADNet is a deep stereo depth estimation model. Its key defining features are: it has a lightweight architecture, which gives it low latency; it supports self-supervised training, so it can be conveniently adapted in the field with no labelled training data; and it's a stereo depth model, which means it's capable of…
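
A hedged inference sketch, assuming the checkpoint is published as a Keras model loadable with huggingface_hub.from_pretrained_keras and that it takes a batched, rectified left/right RGB pair and returns a disparity map; the repo id and the input layout are assumptions, so the card's own usage notes take precedence.

    import numpy as np
    import tensorflow as tf
    from huggingface_hub import from_pretrained_keras  # requires tensorflow

    REPO_ID = "ChristianOrr/madnet_keras"  # hypothetical repo id

    model = from_pretrained_keras(REPO_ID)

    def load_image(path, height=480, width=640):
        """Read an image, normalise to [0, 1], resize, and add a batch axis."""
        img = tf.io.decode_image(tf.io.read_file(path), channels=3,
                                 expand_animations=False)
        img = tf.image.resize(tf.cast(img, tf.float32) / 255.0, (height, width))
        return tf.expand_dims(img, 0)

    left = load_image("left.png")    # rectified left view
    right = load_image("right.png")  # rectified right view

    disparity = model.predict((left, right))  # dense disparity map (assumed output)
    print(np.asarray(disparity).shape)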

vit-large-patch16-224

Vision Transformer (large-sized model). Card sections: Model description; Intended uses & limitations; How to use; Training data; Training procedure; Preprocessing; Pretraining; Evaluation results; BibTeX entry and citation info. Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224, and fine-tuned on ImageNet 2012…
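
The snippet lists a "How to use" section but is truncated before it, so here is a standard transformers classification sketch; the repo id google/vit-large-patch16-224 is inferred from the card title, and the input image path is a placeholder.

    import torch
    from PIL import Image
    from transformers import ViTImageProcessor, ViTForImageClassification

    REPO_ID = "google/vit-large-patch16-224"  # inferred from the card title

    processor = ViTImageProcessor.from_pretrained(REPO_ID)
    model = ViTForImageClassification.from_pretrained(REPO_ID)

    image = Image.open("cat.jpg").convert("RGB")  # placeholder input image
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits

    # The head was fine-tuned on ImageNet 2012, so id2label maps to its classes.
    predicted = logits.argmax(-1).item()
    print(model.config.id2label[predicted])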

New AI tool may help better detect, treat cancers: Study

NEW DELHI: Researchers have developed a new artificial intelligence (AI) tool that interprets medical images with unprecedented clarity and may help clinicians diagnose and better treat cancers that might otherwise go undetected. The tool, called iStar (Inferring Super-Resolution Tissue Architecture) and developed by researchers at the University of Pennsylvania, US, provides both highly detailed views…