LinkedIn: https://www.linkedin.com/in/icecoldmartini/
GitHub: https://github.com/IceColdMartini
Room No: 323, 3rd Floor, Faculty of Electrical & Computer Engineering (FECE) Tower – 3, MIST.
Mirpur Cantonment, Dhaka-1216, Bangladesh.
EDUCATION
Khulna University of Engineering & Technology Jan 2020 – Sept 2025
BSc in Computer Science and Engineering, CGPA: 3.86/4.00 Khulna, Bangladesh
- Advisor: Professor K. M. Azharul Hasan
- Senior-year Thesis Supervisor: Professor Sk. Imran Hossain
Notre Dame College May 2017 – July 2019
Higher Secondary Certificate, GPA: 5.00/5.00 Dhaka, Bangladesh
Ideal School & College Jan 2012 – April 2017
Secondary School Certificate, GPA: 5.00/5.00 Dhaka, Bangladesh
TEACHING EXPERIENCE
Lecturer | Military Institute of Science & Technology Nov 2025 – Present
Department of Computer Science and Engineering Dhaka, Bangladesh
- Courses Taken:
- CSE 106: Structured Programming Language Sessional
- CSE 316: Digital System Design Sessional
- CSE 364: Software Development Project - I
- CSE 444: Pattern Recognition Sessional
- EECE 280: Digital Electronics and Pulse Technique Sessional
INDUSTRY EXPERIENCE
Software Engineer | Intelsense.AI July 2025 – Present
Dhaka, Bangladesh
- Key Responsibilities
- End-to-end ASR and TTS development, fine-tuning, and scaling
- Fast inference and load balancing for locally hosted models
- Agentic AI, RAG pipelines, agent tool development & deployment
- Product Development
- UnisenseAI, a one-stop solution for a unified response system, paired with complete lead and transaction automation backed by large-scale recommendation models.
- SenseForm, a central privacy-protected form-automation system for Bangladesh’s national banking sector, supporting home banking and navigation with multilingual support.
- AirVoice, a coupled ASR-TTS system developed to improve Huawei’s customer service operations.
RESEARCH INTEREST
- Domain: CoreML / Statistical ML, Reinforcement Learning, LLM Alignment, LLM Reasoning, Model Compression, Knowledge Distillation
- Research Questions of Current Interest:
- Can we achieve pure multimodality for Small Language Models (SLMs)? If yes, then how?
- Can we design a more optimized training flow for context-rich student Small VLMs distilled from proprietary LLMs/VLMs?
- How can we develop end-device-deployable, privacy-preserving SLMs for clinical AI settings to enable edge computing?
AWARDS & GRANTS
- Dean’s List Award | KUET Faculty of Electrical and Electronic Engineering
- Recipient of Fresher, Sophomore, Junior, and Senior awards for the sessions 2019-2023
- Academic Excellence Technical Grant | KUET Department of Computer Science and Engineering
- Recipient for Sophomore, Junior, and Senior years for the sessions 2020-2023
TECHNICAL SKILLS
- Model Compression & Optimization
- Quantization [INT8/FP16], Pruning [Structured/Unstructured], Knowledge Distillation, Low-Rank Factorization, Model Profiling & Benchmarking
- Advanced Python
- PyTorch, TensorFlow, Hugging Face Transformers, ONNX, Scikit-learn, NumPy, Pandas
- Multimodal AI
- Vision-Language Models, Cross-Modal Fusion, Multi-Modal Transformers, Attention Mechanisms
- MLOps & Deployment
- Docker, Kubernetes, Model Versioning, CI/CD Pipelines, Performance Monitoring, A/B Testing
- Model Alignment & Fine-Tuning
- SFT [Supervised Fine-Tuning], DPO [Direct Preference Optimization], Knowledge Distillation for Alignment, Safety-Preserving Compression
- Healthcare AI Development
- Medical Data Pre-processing, HIPAA-Compliant Systems, Clinical Dataset Handling, Privacy-Preserving ML
