huggingface/transformers (Public)
Commit History (branch: main)
Commits on Sep 19, 2024
0c718f1  Fix Llama 3 TikToken conversion (#33538) · pcuenca
4d8908d  [tests] enable GemmaIntegrationTest on XPU (#33555) · faaany
b87755a  [tests] skip tests for xpu (#33553) · faaany
f111d5b  Uniformize kwargs for Paligemma processor and update docs (#33571) · yonigozlan
52920b5  Cache: don't throw warnings on gemma2 when instantiating a new cache (#33595) · gante
b50ff59  [Mamba2] Move dt calculations to kernel (#33520) · vasqu
162056a  change sequence_bias type of SequenceBiasLogitsProcessor to list, add… (#33375) · VladOS95-cyber
d9d59e7  Generate: check that attention_mask is 2D (#33575) · gante
413008c  add uniform processors for altclip + chinese_clip (#31198) · molbap
4f0246e  fix tests with main revision and read token (#33560) · molbap
80b774e  Cache: don't show warning in forward passes when past_key_values is None (#33541) · gante
f3b3810  rag: fix CI (#33578) · gante
d7975a5  VLMs: enable generation tests (#33533) · zucchini-nlp and gante
e40bb48  Load and save video-processor from separate folder (#33562) · zucchini-nlp and amyeroberts
Commits on Sep 18, 2024
5af7d41  Codec integration (#33565) · ylacombe and amyeroberts
6019f3f  Fix bnb dequantization (#33546) · SunMarc
7b1ce63  Improve compiled RT-DETR inference speed (#33412) · yonigozlan
9db963a  enforce original size to be a list (#33564) · dom-dziela
8efc06e  Return attention mask in ASR pipeline to avoid warnings (#33509) · Rocketknight1
7542fac  Pipeline: no side-effects on model.config and model.generation_config 🔫 (#33480) · gante
fc83a4d  Added support for bfloat16 to zero-shot classification pipeline (#33554) · umarbutler and Rocketknight1
f883827  Fix tests in ASR pipeline (#33545) · ylacombe
4f1e9ba  fix the wandb logging issue (#33464) · ZIYU-DEEP
5427eaa  [i18n-ur] Added README_ur.md file (#33461) · akkefa
9f2b8cc  Fix missing head_dim in llama config from gguf model (#33526) · Isotr0py
db72894  Chat template: save and load correctly for processors (#33462) · zucchini-nlp and amyeroberts
52e22cb  Fix for slow the bug tokenizer adding spaces to single id decodes (#32564) · DuyguA and itazap
e6d9f39  Decorator for easier tool building (#33439) · aymeric-roucher
fee8651  Support LLaVa-OV-Chat (#33532) · zucchini-nlp
Commits on Sep 17, 2024
454a0f2  fix patch_attention_mask incorrect setting which leads to the differe… (#33499) · sywangyi
6c051b4  Add revision to trainer push_to_hub (#33482) · teamclouday
d8500cd  Uniformize kwargs for Pixtral processor (#33521) · yonigozlan
c29a869  Fix missing sequences_scores in the Whisper beam search output (#32970) · Nik-Kras
46c2757  fix to jamba config, asserting attention and expert offset (#33316) · ErezSC42
3476c19  CI Build image - move runners (#33530) · glegendre01