NeurIPS 2023 Workshop WANT Submissions
Accelerating Deep Learning using Ivy
Guillermo Sanchez-Brizuela, Ved Patwardhan, Matthew Barrett, Paul Anderson, Mustafa Hani, Daniel James Lenton
Published: 28 Oct 2023, Last Modified: 01 Dec 2023
WANT@NeurIPS 2023 Poster

Remaining-Useful-Life Prediction and Uncertainty Quantification using LSTM Ensembles for Aircraft Engines
Oishi Deb, Emmanouil Benetos, Philip Torr
Published: 28 Oct 2023, Last Modified: 05 Dec 2023
WANT@NeurIPS 2023 Poster

LeanFlex-GKP: Advancing Hassle-Free Structured Pruning with Simple Flexible Group Count
Jiamu Zhang, Shaochen Zhong, Andrew Ye, Zirui Liu, Kaixiong Zhou, Xia Hu, Shuai Xu, Vipin Chaudhary
Published: 28 Oct 2023, Last Modified: 01 Dec 2023
WANT@NeurIPS 2023 Poster

Task Arithmetic with LoRA for Continual Learning
Rajas Chitale, Ankit Vaidya, Aditya Kane, Archana Santosh Ghotkar
Published: 28 Oct 2023, Last Modified: 23 Nov 2023
WANT@NeurIPS 2023 Poster

A Quadratic Synchronization Rule for Distributed Deep Learning
Xinran Gu, Kaifeng Lyu, Sanjeev Arora, Jingzhao Zhang, Longbo Huang
Published: 28 Oct 2023, Last Modified: 01 Dec 2023
WANT@NeurIPS 2023 Poster

InstaTune: Instantaneous Neural Architecture Search During Fine-Tuning
Sharath Nittur Sridhar, Souvik Kundu, Sairam Sundaresan, Maciej Szankin, Anthony Sarah
Published: 28 Oct 2023, Last Modified: 01 Dec 2023
WANT@NeurIPS 2023 Poster

Efficient and Approximate Per-Example Gradient Norms for Gradient Noise Scale
Gavia Gray, Anshul Samar, Joel Hestness
Published: 28 Oct 2023, Last Modified: 30 Nov 2023
WANT@NeurIPS 2023 Poster

Bandit-Driven Batch Selection for Robust Learning under Label Noise
Michal Lisicki, Graham W. Taylor, Mihai Nica
Published: 28 Oct 2023, Last Modified: 01 Dec 2023
WANT@NeurIPS 2023 Poster

Patch Gradient Descent: Training Neural Networks on Very Large Images
Deepak Gupta, Gowreesh Mago, Arnav Chavan, Dilip Prasad, Rajat Mani Thomas
Published: 28 Oct 2023, Last Modified: 28 Oct 2023
WANT@NeurIPS 2023 Poster

CoTFormer: More Tokens With Attention Make Up For Less Depth
Amirkeivan Mohtashami, Matteo Pagliardini, Martin Jaggi
Published: 28 Oct 2023, Last Modified: 29 Nov 2023
WANT@NeurIPS 2023 Oral

Sparse Iso-FLOP Transformations for Maximizing Training Efficiency
Vithursan Thangarasa, Shreyas Saxena, Abhay Gupta, Sean Lie
Published: 28 Oct 2023, Last Modified: 01 Dec 2023
WANT@NeurIPS 2023 Poster

Tiny Graph Convolutional Networks with Topologically Consistent Magnitude Pruning
Hichem Sahbi
Published: 28 Oct 2023, Last Modified: 30 Nov 2023
WANT@NeurIPS 2023 Poster

DAREL: Data Reduction with Losses for Training Acceleration of Real and Hypercomplex Neural Networks
Alexander Vladimirovich Demidovskij, Aleksei Trutnev, Artem Tugarev, Igor Salnikov, Stanislav Pavlov
Published: 28 Oct 2023, Last Modified: 01 Dec 2023
WANT@NeurIPS 2023 Poster

DYAD: A Descriptive Yet Abjuring Density efficient approximation to linear neural network layers
Sarin Eapen Chandy, Varun Prashant Gangal, Yi Yang, Gabriel Maggiotti
Published: 28 Oct 2023, Last Modified: 01 Dec 2023
WANT@NeurIPS 2023 Poster

Efficient Parallelization Layouts for Large-Scale Distributed Model Training
Johannes Hagemann, Samuel Weinbach, Konstantin Dobler, Maximilian Schall, Gerard de Melo
Published: 28 Oct 2023, Last Modified: 30 Nov 2023
WANT@NeurIPS 2023 Oral

Something for (almost) nothing: improving deep ensemble calibration using unlabeled data
Konstantinos Pitas, Julyan Arbel
Published: 28 Oct 2023, Last Modified: 01 Dec 2023
WANT@NeurIPS 2023 Poster

DONUT-hole: DONUT Sparsification by Harnessing Knowledge and Optimizing Learning Efficiency
Azhar Shaikh, Michael Cochez, Denis Diachkov, Michiel de Rijcke, Sahar Yousefi
Published: 28 Oct 2023, Last Modified: 17 Nov 2023
WANT@NeurIPS 2023 Poster

Improving Deep Ensembles without Communication
Konstantinos Pitas, Michael Arbel, Julyan Arbel
Published: 28 Oct 2023, Last Modified: 01 Dec 2023
WANT@NeurIPS 2023 Poster

Batched Low-Rank Adaptation of Foundation Models
Yeming Wen, Swarat Chaudhuri
Published: 28 Oct 2023, Last Modified: 29 Nov 2023
WANT@NeurIPS 2023 Poster

Embarrassingly Simple Dataset Distillation
Yunzhen Feng, Shanmukha Ramakrishna Vedantam, Julia Kempe
Published: 28 Oct 2023, Last Modified: 30 Nov 2023
WANT@NeurIPS 2023 Poster

Local LoRA: Memory-Efficient Fine-Tuning of Large Language Models
Oscar Key, Jean Kaddour, Pasquale Minervini
Published: 28 Oct 2023, Last Modified: 01 Dec 2023
WANT@NeurIPS 2023 Poster

Dynamic Observation Policies in Observation Cost-Sensitive Reinforcement Learning
Colin Bellinger, Mark Crowley, Isaac Tamblyn
Published: 28 Oct 2023, Last Modified: 28 Oct 2023
WANT@NeurIPS 2023 Poster

Cooperative Learning for Cost-Adaptive Inference
Xingli Fang, Richard M Bradford, Jung-Eun Kim
Published: 28 Oct 2023, Last Modified: 29 Nov 2023
WANT@NeurIPS 2023 Poster

Early Weight Averaging meets High Learning Rates for LLM Pre-training
Sunny Sanyal, Atula Tejaswi Neerkaje, Jean Kaddour, Abhishek Kumar, Sujay Sanghavi
Published: 28 Oct 2023, Last Modified: 30 Nov 2023
WANT@NeurIPS 2023 Poster

A foundation for exact binarized morphological neural networks
Theodore Aouad, Hugues Talbot
Published: 28 Oct 2023, Last Modified: 30 Nov 2023
WANT@NeurIPS 2023 Poster