lt-asset

5 models

nova-1.3b-bcr

llama · 166 · 6

Waffle_VLM_WebSight

WAFFLE: Multi-Modal Model for Automated Front-End Development

We develop WAFFLE, a fine-tuning approach that trains multi-modal LLMs (MLLMs) to generate HTML code from webpage screenshots or UI designs. WAFFLE uses a structure-aware attention mechanism to improve MLLMs' understanding of HTML's structure, and a contrastive fine-tuning approach to align MLLMs' understanding of UI images and HTML code. Models fine-tuned with WAFFLE show up to 9.00 pp (percentage points) higher HTML match, 0.0982 higher CW-SSIM, 32.99 higher CLIP, and 27.12 pp higher LLEM on our new benchmark WebSight-Test and the existing benchmark Design2Code.

Updates:
- 10/24/2024: Our preprint is available at: arXiv, huggingface
- 10/24/2024: Our code (actively maintained) is available at: code
- 10/24/2024: Our fine-tuned WaffleVLMWebSight (7B), trained with DoRA, is released at: lt-asset/WaffleVLMWebSight

Dependencies:
- peft 0.11.1
- transformers 4.41.1
- pytorch 2.3.0
- selenium
- Python 3.10.14
- deepspeed 0.14.1
- datasets 2.19.1
- beautifulsoup4 4.12.3
- accelerate 0.30.1

Render the HTML, or preview it, to check correctness.

License: The model is built on top of VLMWebSightfinetuned; users should therefore comply with that model's license. The DoRA weights we trained are merged with the original model's weights to produce the final model. We release the final model's weights under the Apache-2.0 license.
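The card suggests rendering or previewing the generated HTML to check correctness. Before a full browser render (e.g. via the listed selenium dependency), a lightweight tag-balance check can flag obviously malformed model output early. The sketch below uses only Python's standard-library `html.parser`; it is an illustrative pre-check, not part of the WAFFLE codebase:

```python
from html.parser import HTMLParser

# Void elements never take a closing tag in HTML.
VOID = {"area", "base", "br", "col", "embed", "hr", "img", "input",
        "link", "meta", "param", "source", "track", "wbr"}

class TagBalanceChecker(HTMLParser):
    """Collects unmatched open/close tags from a generated HTML string."""
    def __init__(self):
        super().__init__()
        self.stack = []
        self.errors = []

    def handle_starttag(self, tag, attrs):
        if tag not in VOID:
            self.stack.append(tag)

    def handle_endtag(self, tag):
        if tag in VOID:
            return  # e.g. a stray "</br>"; void tags are never on the stack
        if self.stack and self.stack[-1] == tag:
            self.stack.pop()
        else:
            self.errors.append(f"unexpected </{tag}>")

def check_html(source: str) -> list[str]:
    """Return a list of balance problems; an empty list means tags balance."""
    checker = TagBalanceChecker()
    checker.feed(source)
    checker.close()
    return checker.errors + [f"unclosed <{t}>" for t in checker.stack]

print(check_html("<div><p>ok</p></div>"))  # → []
print(check_html("<div><p>bad</div>"))     # → ['unexpected </div>', 'unclosed <div>', 'unclosed <p>']
```

A check like this only catches structural breakage; a pixel-level comparison against the input screenshot still requires actually rendering the page.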

license: bsd-3-clause-clear · 113 · 14

nova-6.7b-bcr

Nova: Generative Language Models for Assembly Code with Hierarchical Attention and Contrastive Learning

Model artifact for the paper "Nova: Generative Language Models for Assembly Code with Hierarchical Attention and Contrastive Learning" (ICLR 2025).

Introduction of Nova: Nova is pre-trained with the language modeling objective, starting from DeepSeek-Coder checkpoints, using disassembly code from AnghaBench and C/C++ programs compiled from The-Stack. This repository hosts the instruction-tuned Nova model for binary code recovery, with 6.7B parameters. The other models in this series:
- Nova-1.3b: Foundation model for binary code with 1.3B parameters.
- Nova-1.3b-bcr: Nova-1.3b further instruction-tuned for binary code recovery.
- Nova-6.7b: Foundation model for binary code with 6.7B parameters.

Binary Code Recovery Generation: see the example generation code at examplegeneraton.py.
Test Case Execution: see the example evaluation code at exampleevaluation.py.
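The actual generation entry point ships as examplegeneraton.py in the repository. As a rough orientation only, a recovery call built on the Hugging Face transformers API might be shaped like the sketch below; the prompt template and the model id are placeholders of mine, not the repository's actual format:

```python
def build_recovery_prompt(disassembly: str) -> str:
    """Wrap raw disassembly in an instruction for source-code recovery.

    NOTE: this template is a hypothetical placeholder; the real prompt
    format used by Nova is defined in the repo's examplegeneraton.py.
    """
    return (
        "# Task: recover the C source code from the following disassembly.\n"
        f"{disassembly}\n"
        "# Recovered source:\n"
    )

asm = "push rbp\nmov rbp, rsp\nmov eax, 0\npop rbp\nret"
prompt = build_recovery_prompt(asm)
print(prompt)

# Feeding the prompt to the model would then look roughly like this
# (not run here, since it downloads a 6.7B checkpoint; the model id
# "lt-asset/nova-6.7b-bcr" is assumed from the listing name):
#
#   from transformers import AutoTokenizer, AutoModelForCausalLM
#   tok = AutoTokenizer.from_pretrained("lt-asset/nova-6.7b-bcr")
#   model = AutoModelForCausalLM.from_pretrained("lt-asset/nova-6.7b-bcr")
#   out = model.generate(**tok(prompt, return_tensors="pt"), max_new_tokens=512)
#   print(tok.decode(out[0], skip_special_tokens=True))
```

For the exact prompt construction (and the hierarchical-attention specifics), defer to the repository's own example scripts.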

llama · 73 · 6

nova-6.7b

llama · 12 · 5

nova-1.3b

Nova: Generative Language Models for Assembly Code with Hierarchical Attention and Contrastive Learning

Model artifact for the paper "Nova: Generative Language Models for Assembly Code with Hierarchical Attention and Contrastive Learning" (ICLR 2025).

Introduction of Nova: Nova is pre-trained with the language modeling objective, starting from DeepSeek-Coder checkpoints, using disassembly code from AnghaBench and C/C++ programs compiled from The-Stack. This repository hosts the foundation model of Nova, with 1.3B parameters. The other models in this series:
- Nova-1.3b-bcr: Nova-1.3b further instruction-tuned for binary code recovery.
- Nova-6.7b: Foundation model for binary code with 6.7B parameters.
- Nova-6.7b-bcr: Nova-6.7b further instruction-tuned for binary code recovery.

llama · 9 · 5