
BongoFashion: A Versatile Dataset for Real-time Detection and Classification of Bangladeshi Wearables (2024)
Abstract—In this paper, we present BongoFashion, the largest dataset of Bangladeshi clothing and fashion accessories, containing 11,498 images across 39 categories of commonly worn items. Our dataset captures the rich diversity of Bangladeshi wearables, including both traditional and modern attire, in real-world settings with varied backgrounds, resolutions, and poses. We trained and compared multiple object detection models, including six YOLO variants (YOLOv5 to YOLOv10) and the Detectron2 framework, in two experimental settings: one using the original dataset and another applying data augmentations to our training dataset. Our experiments revealed that augmentations improve model performance, with the YOLOv9 model achieving the highest mean average precision (mAP) of 91.7%. This dataset and our findings offer valuable resources for advancing wearable detection and classification, with promising applications in e-commerce, fashion search engines, and automated clothing categorization systems.
Bangladeshi Wearables
Fashion Accessories
Object Detection
Clothing Detection
Clothing Classification
YOLOv9
YOLOv10
Detectron2
Deep Learning
Image Recognition
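
Since the abstract reports training several YOLO variants with and without data augmentation, a minimal training sketch may help readers reproduce the setup. The snippet below assumes the Ultralytics YOLO API; the dataset config name "bongofashion.yaml", the checkpoint choice, and all hyperparameter values are illustrative placeholders rather than the paper's actual settings.

# Minimal sketch: fine-tuning a YOLOv9 detector on a custom wearables dataset
# with standard augmentations. Assumes the Ultralytics package is installed;
# "bongofashion.yaml" is a hypothetical dataset config listing train/val paths
# and the 39 class names.
from ultralytics import YOLO

model = YOLO("yolov9c.pt")  # pretrained YOLOv9 checkpoint as a starting point

model.train(
    data="bongofashion.yaml",  # hypothetical dataset config
    epochs=100,
    imgsz=640,
    batch=16,
    # augmentation knobs exposed by the Ultralytics trainer (illustrative values)
    hsv_h=0.015, hsv_s=0.7, hsv_v=0.4,       # color jitter
    degrees=10.0, translate=0.1, scale=0.5,  # geometric transforms
    fliplr=0.5,                              # horizontal flip
    mosaic=1.0,                              # mosaic augmentation
)

metrics = model.val()     # evaluates on the validation split
print(metrics.box.map50)  # mAP@0.5, comparable to the mAP figures reported above

The same loop can be repeated with the augmentation arguments zeroed out to reproduce the "original dataset" setting and compare the two runs.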

Math101: A Novel Approach to Optimizing LLMs Using Block-D and Pruning
Abstract—Recent improvements in large language models (LLMs) have significantly advanced their effectiveness in handling complex mathematical reasoning tasks. However, achieving efficient inference and managing computational costs remain major challenges, especially in resource-limited environments. This work presents an optimized framework that enhances inference speed through a dual-pruning strategy—Dominance-Based Pruning and Threshold-Based Pruning—combined with a Block Decomposition (Block-D) technique for sequential sample processing. By organizing problem-solving into manageable blocks, our method minimizes redundant computations, particularly for simpler problems. Our approach demonstrates a 4x increase in inference speed. In testing, 73% of problems were pruned using only the first block, maintaining an accuracy rate of 92% for these cases, and achieving an overall model accuracy of 84%.
Inference Optimization
Mathematical Reasoning
Bengali Math Problem-Solving
Self-Consistency
Tool-Integrated Reasoning (TIR)
Chain of Thought (CoT)
Program of Thought (PoT)
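
The block-wise early stopping described in the abstract can be illustrated with a short sketch. The code below is a hypothetical reconstruction, not the paper's implementation: sample_answer is an assumed callable that draws one candidate answer from the LLM, and the block size, dominance margin, and vote-share threshold are placeholder values standing in for the two pruning criteria.

# Illustrative sketch of block-wise self-consistency with early stopping.
# Assumptions: sample_answer(problem) is a hypothetical function returning one
# candidate answer; all numeric parameters are placeholders.
from collections import Counter

def solve_with_block_pruning(problem, sample_answer,
                             block_size=8, max_blocks=4,
                             dominance_margin=3, vote_threshold=0.6):
    votes = Counter()
    for block in range(max_blocks):
        # Block-D idea: process one block of samples at a time.
        for _ in range(block_size):
            votes[sample_answer(problem)] += 1

        ranked = votes.most_common(2)
        top_answer, top_count = ranked[0]
        runner_up = ranked[1][1] if len(ranked) > 1 else 0
        total = sum(votes.values())

        # Dominance-based pruning: the leader is far ahead of the runner-up.
        dominant = (top_count - runner_up) >= dominance_margin
        # Threshold-based pruning: the leader holds a large share of all votes.
        above_threshold = (top_count / total) >= vote_threshold

        if dominant or above_threshold:
            return top_answer, block + 1  # stop early; later blocks are skipped

    return votes.most_common(1)[0][0], max_blocks

Under this reading, easy problems tend to satisfy one of the two criteria after the first block, which is consistent with the abstract's observation that 73% of problems were pruned using only the first block.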