Assistant Professor, Faculty of Integrated Technologies
Dr Sandhya Aneja is an Assistant Professor of Information and Communication Systems Engineering at the Faculty of Integrated Technologies, Universiti Brunei Darussalam (UBD). Before joining UBD, she worked at the Institute of Informatics and Communication (IIC), University of Delhi, India.
She also worked at the UBD-IBM Centre of Universiti Brunei Darussalam from 2013 to 2015. During her time at the Centre, she worked on two research projects: a Flood Forecasting system for the Blue Gene/P (BGP) supercomputer, and Community and Grid Friendly Smart Homes (CGFH). She was involved in developing and deploying the flood model on BG/P, as well as in the equipment and software developed for CGFH.
Her primary areas of interest include Wireless Networks, High-Performance Computing, the Internet of Things, and Artificial Intelligence technologies - Machine Learning, Machine Translation, Deep Learning, Data Science, and Data Analytics. She leads a lab (UBD NSSPLab) equipped with a high-end machine with multiple GPUs and CPUs, which meets the data-size demands of AI workloads.
PhD in Computer Science, Delhi University, India
MTech in Computer Applications, Indian Institute of Technology (IIT), Delhi, India
Graduate Aptitude Test in Engineering (GATE) Qualified
Internet of Things (IoT) - Autonomous vehicle under GAN training
Artificial Intelligence- Deep Learning, Natural Language Processing
There are mainly two types of networks: the Internet and the telecommunication network. IoT devices connect using one of these existing networks, and through data collection IoT supports goals such as the smart city, the smart nation, and the smart vehicle. This project proposes an experiment with multiple autonomous driving vehicles (ADVs) in two scenarios: (i) ADVs trained with sensor and image data, i.e. without a GAN; and (ii) ADVs trained with data generated by a GAN model. A real-world application to improve the GAN model is proposed.
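The GAN training loop behind scenario (ii) can be sketched in miniature. The example below is a deliberately simplified 1-D version (not the ADV pipeline): the generator and discriminator are single affine units, the "sensor data" is a Gaussian stand-in, and one alternating gradient step is derived by hand for each player.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator g(z) = w_g*z + b_g maps noise to synthetic 1-D "sensor readings";
# discriminator d(x) = sigmoid(w_d*x + b_d) scores real vs. generated samples.
w_g, b_g = 0.1, 0.0
w_d, b_d = 0.5, 0.0
lr = 0.01

real = rng.normal(3.0, 1.0, size=64)   # stand-in for real sensor data
z = rng.normal(size=64)                # noise input to the generator
fake = w_g * z + b_g

d_real = sigmoid(w_d * real + b_d)
d_fake = sigmoid(w_d * fake + b_d)

# Discriminator loss: -E[log d(real)] - E[log(1 - d(fake))]
d_loss = -np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake))
# Non-saturating generator loss: -E[log d(fake)]
g_loss = -np.mean(np.log(d_fake))

# Hand-derived gradient step for the discriminator, then the generator
grad_wd = np.mean(-(1.0 - d_real) * real) + np.mean(d_fake * fake)
w_d -= lr * grad_wd
d_fake = sigmoid(w_d * fake + b_d)     # re-score fakes with the updated critic
grad_wg = np.mean(-(1.0 - d_fake) * w_d * z)
w_g -= lr * grad_wg
```

In the real experiment the two players would be deep convolutional networks trained on camera and sensor streams, but the alternating minimax structure is the same.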
AlexNet, VGG, Inception, and DenseNet are among the deep neural network models with large numbers of layers. As the number of layers grows, the gradient with respect to the early layers' weights washes out (the vanishing-gradient problem), yet accuracy tends to improve; each layer's features are therefore fed into subsequent layers to preserve the extracted information. What is the optimum number of layers for each state-of-the-art dataset, and is it always beneficial to add more layers? This project explores a solution for the optimum number of layers.
The Malay language, the language of Brunei Darussalam, uses the same character set as English. This project builds a dataset and establishes a BLEU score for a Malay-to-English translation system. Related topics, such as adversarial text generation and anomalous text-sequence detection, are to be studied in the context of the Malay language.
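The BLEU score used to evaluate such a system combines clipped n-gram precisions with a brevity penalty. A minimal single-reference sentence-level implementation (with a small smoothing constant for zero higher-order matches, an assumption not in the original metric definition) looks like this:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def sentence_bleu(reference, candidate, max_n=4):
    """BLEU for one candidate against one reference (token lists)."""
    precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        # clipped (modified) n-gram precision
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)   # smoothed to avoid log(0)
    # brevity penalty punishes candidates shorter than the reference
    bp = 1.0 if len(candidate) >= len(reference) else \
        math.exp(1.0 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)
```

A perfect translation scores 1.0; truncated or divergent output is pushed toward 0 by both the precision terms and the brevity penalty. Corpus-level BLEU, as reported in practice, aggregates these statistics over all sentences before taking the geometric mean.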
Google Scholar Citations
Google Scholar h-index
Google Scholar i10-index
1. Sandhya Aneja, Siti Nur Afikah Bte Abdul Mazid, Nagender Aneja, Neural Machine Translation model for University Email Application, The International Conference on Natural Language Processing, ICNLP, July 11 - July 13, 2020, Guangzhou, China
1. Neelima Gupta, Sandhya Khurana, SEEEP: Simple and Efficient End-to-End Protocol to secure Ad Hoc Networks against Wormhole Attacks, The International Conference on Wireless and Mobile Communications, ICWMC 2008, July 27 - August 1, 2008, Athens, Greece.
2. Swati Singhal, Sandhya Aneja, Frank Liu, Lucas Villa Real, and Thomas George, IFM: A Scalable High-Resolution Flood Modeling Framework, In the international conference of Parallel Processing (Europar-2014), August 2014.
3. Sandhya Khurana, Neelima Gupta, Nagender Aneja, Reliable Ad hoc On-demand Distance Vector Routing Protocol, The International Conference on Networking, ICN 2006, April 23-28, 2006, Mauritius.
4. Sandhya Khurana, Neelima Gupta, Nagender Aneja, Minimum Exposed Path to the Attack (MEPA) in Mobile Ad hoc Network (MANET), The International Conference on Networking, ICN 2007, April 22-28, 2007, Sainte-Luce, Martinique, France.
5. Sandhya Aneja, Nagender Aneja, and Md Shohidul Islam (2018), "IoT Device Fingerprint using Deep Learning", The 2018 International Conference on Internet of Things and Intelligence Systems.
1. University Email Corpus in the Malay Language
Machine translation has many applications, such as news translation, email translation, and official-letter translation. Commercial translators, e.g. Google Translate, lag in regional vocabulary and are unable to learn the bilingual text in the source and target languages within the input. A regional-vocabulary, application-oriented Neural Machine Translation (NMT) model is proposed over a dataset of emails used at the University for communication over a period of three years. A state-of-the-art Sequence-to-Sequence neural network for ML→EN (Malay to English) and EN→ML (English to Malay) translation, using a Gated Recurrent Unit (GRU) recurrent neural network machine translation model with an attention decoder, is compared with Google Translate. The lower BLEU score of Google Translate in comparison to our model indicates that application-based regional models perform better. The low BLEU scores of both our model and Google Translate for English to Malay indicate that the Malay language has complex language features relative to English.
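The attention decoder mentioned above, at each output step, scores the decoder state against every encoder state and takes a weighted sum. A minimal dot-product sketch of one such step (the actual model may use a different scoring function, e.g. Bahdanau-style additive attention; the shapes here are illustrative):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_step(query, keys, values):
    """One decoder step of dot-product attention over encoder states.

    query:  (d,)   current decoder hidden state
    keys:   (T, d) encoder hidden states used for scoring
    values: (T, d) encoder hidden states to be summed
    """
    scores = keys @ query        # similarity to each source position, shape (T,)
    weights = softmax(scores)    # normalised attention distribution over the source
    context = weights @ values   # context vector fed to the decoder, shape (d,)
    return context, weights
```

The attention weights form a probability distribution over source tokens, which is what lets the decoder "look back" at the relevant Malay words while emitting each English word.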
2. Graph-based Image Data of Inter-arrival Time of packets from the devices on a campus network
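One plausible way to render packet inter-arrival times (IATs) as image data for a convolutional model is to bin consecutive IAT pairs into a 2-D grid; the dataset entry above does not specify its exact encoding, so the binning scheme below is an illustrative assumption.

```python
import numpy as np

def iat_image(timestamps, bins=32, max_iat=1.0):
    """Turn packet timestamps into a normalised 2-D IAT-pair histogram."""
    iat = np.diff(np.sort(np.asarray(timestamps, dtype=float)))
    iat = np.clip(iat, 0.0, max_iat)
    x, y = iat[:-1], iat[1:]            # consecutive IAT pairs as (x, y) points
    img, _, _ = np.histogram2d(x, y, bins=bins,
                               range=[[0.0, max_iat], [0.0, max_iat]])
    if img.max() > 0:
        img = img / img.max()           # normalise counts to a grayscale image
    return img
```

Different devices on a campus network exhibit characteristic timing patterns, so such images can serve as input to a CNN for device fingerprinting, in the spirit of publication 5 above.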