Item – Theses Canada
OCLC number
1294011937
Link(s) to full text
LAC copy
Author
Liu, Xiaoxu.
Title
A Spam Transformer Model for SMS Spam Detection.
Degree
MCS -- Université d'Ottawa / University of Ottawa, 2021
Publisher
[Ottawa, Ontario] : Université d'Ottawa / University of Ottawa, 2021
Description
1 online resource
Abstract
With the prosperity of the Short Message Service (SMS), the growing number of spam messages has become a serious problem, and blocking them requires new SMS spam detection technologies. The Transformer, an attention-based sequence-to-sequence model, has recently achieved excellent results on many different tasks. In this thesis, we propose a modified Transformer model for SMS spam message detection. We evaluate our proposed modified spam Transformer on the SMS Spam Collection v.1 dataset and the UtkMl's Twitter Spam Detection Competition dataset, benchmarking it against several established classifiers such as Logistic Regression, Naïve Bayes, Random Forests, Support Vector Machine, and Long Short-Term Memory. In comparison to all other candidates, our experiments show that the proposed modified spam Transformer achieves the best results in terms of almost all selected performance criteria.
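The thesis itself is linked above rather than reproduced here, but the abstract's benchmark list includes a Naïve Bayes classifier, a standard baseline for SMS spam detection. A minimal sketch of such a baseline, using only the Python standard library and a tiny hypothetical toy corpus (not the thesis datasets, and not the thesis's actual implementation):

```python
import math
from collections import Counter

def train_nb(messages, labels):
    """Fit per-class word counts and class priors (labels: 1 = spam, 0 = ham)."""
    counts = {0: Counter(), 1: Counter()}
    priors = Counter(labels)
    for text, y in zip(messages, labels):
        counts[y].update(text.lower().split())
    vocab = set(counts[0]) | set(counts[1])
    return counts, priors, vocab

def predict_nb(text, counts, priors, vocab):
    """Pick the class with the higher log posterior, with add-one smoothing."""
    total = sum(priors.values())
    scores = {}
    for y in (0, 1):
        n_words = sum(counts[y].values())
        score = math.log(priors[y] / total)
        for w in text.lower().split():
            score += math.log((counts[y][w] + 1) / (n_words + len(vocab)))
        scores[y] = score
    return max(scores, key=scores.get)

# Hypothetical toy data, for illustration only.
msgs = ["win a free prize now", "free cash win now",
        "see you at lunch", "lunch at noon then"]
ys = [1, 1, 0, 0]
model = train_nb(msgs, ys)
print(predict_nb("free prize now", *model))   # 1 (spam)
print(predict_nb("see you at noon", *model))  # 0 (ham)
```

The thesis's contribution is that an attention-based Transformer outperforms baselines of this kind on the evaluated datasets; this sketch only illustrates what one such baseline looks like.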
Other link(s)
ruor.uottawa.ca
hdl.handle.net
dx.doi.org
Subject
SMS spam detection
Transformer
Attention
Deep learning
Date modified:
2022-09-01