AGENDA
4 Dec, 2023
8:45-9:00
9:00-9:15
9:15-9:45
9:45-11:15
REGISTRATION
Welcome from CSARRN & ICMEC
Opening Address - South Australia Police
Dancing with the devil: A case study highlighting the dangers associated with investigating child sexual abuse
Presenting:
Stephen Patterson & Jacob O'Callaghan, Joint Anti Child Exploitation Team
Offending behaviours and experiences with the criminal justice system
Are you a cop?: Avoiding Suspicion in Internet Stings with Online Groomers
Presenting Author
Kathryn Seigfried-Spellar, Purdue University
Co-authors: Tatiana Ringenberg (lead), Julia Rayz
Online enticement is a broad category of online child sexual exploitation in which a child is groomed to take sexually explicit images, meet face-to-face with someone for sexual purposes, and/or engage in cybersex roleplay. These online offenses all involve conversations between minors and offenders in which the offenders engage in child sexual grooming. Through child sexual grooming, the adult uses deceptive trust and friendship formation to lure a minor into engaging in sexual behaviors. Grooming includes non-linear stages, which differ between contact-driven and CSAM-focused offenders. To conduct efficient and successful Internet stings, officers must understand how groomers communicate with real victims. Our previous research suggests officers are more likely than real victims to trigger doubts and suspicions in offenders. However, no prior research explores how officers trigger these suspicions or in what ways their communications lexically differ from those of real minors. In this paper, we use thematic analysis to explore what triggers offender suspicion and how victims and officers respond to accusations from the offender. We examine 40 deidentified child sexual exploitation cases involving chats: 20 undercover officer-offender Internet sting cases and 20 minor-offender cases from across the United States. In the first round of coding, we identify instances of groomers becoming suspicious of the identity of the chatter. In the second round, we code lines that appear to lead to the groomer’s suspicion. In the final round, we code how the chatter responded to the groomer’s suspicions. We then aggregate our findings into themes and compare across the victim and Internet sting chats. To reduce bias, the coders are blind to whether a chat involves a real victim or an undercover officer, and to any offense details (e.g., contact vs. non-contact offense).
This study has implications for law enforcement agencies seeking to avoid triggering suspicion and risk assessment by offenders during Internet stings.
Online paedophiles and criminal justice experience: A qualitative perspective
Lead Author
Matthew Ball, Australian National University
Co-author: Roderic Broadhurst
The experiences of paedophiles across three stages of the criminal justice process (i.e., pre-, during-, and post-custody) will be studied using qualitative content analysis. This study uses data from 590 online forum posts, made between June 8, 2012, and May 1, 2023, by 206 unique users, to examine the experiences of paedophiles in the criminal justice system. Forum posts discuss topics such as fear of arrest and therapeutic experiences, and raise questions about the criminal justice system, from investigation to experiences in prison. This study contributes to knowledge by providing insight into a ‘hidden population’, which may help inform prevention and treatment options.
Identifying digital forensic artifacts related to a hybrid risk assessment model for child sexual abuse offenders
Lead Author
Marcus Rogers, Purdue University
Co-authors: Kathryn C. Seigfried-Spellar, Nina L. Matulis, & Jacob S. Heasley
Determining the risk that individuals involved in online child sexual offenses pose of re-offending or escalating their criminal behavior has been problematic. Traditional sexual offender risk measures have lower predictive validity when dealing with online child pornography offenders. Recent research argues for the need for a formalized hybrid risk assessment model that combines current risk measures for online sex offenses against children with digital forensic artifact analysis. This study is an initial step toward formalizing the hybrid risk assessment model by identifying the digital forensic artifacts that have the potential to be valid and reliable indicators of risk when assessing CSA offender risk. The current study will examine 10 closed child sexual abuse criminal investigations from the Tippecanoe County High Tech Crime Unit (HTCU). The following digital forensic artifacts will be examined: 1) organization and context of the CSAM collection (e.g., gender ratio and adult pornography versus child content, as suggested by the CPORT scale), 2) evidence of intentional/active dissemination efforts, and 3) activities related to normalizing CSAM behaviors (e.g., membership or networking with online communities). The HTCU Commander will randomly select 10 closed CSA cases; investigation details related to probable cause, final charges, conviction, and/or offender risk will not be shared with the researchers. The research team will only examine the digital forensic artifacts from the offender’s mobile devices and computers. Evidence collected from victims’ devices will not be examined. The analysis will be used to predict whether the offender is a CSAM-only, contact, or dual offender, including the likelihood of recidivism and risk of future CSA offenses. Results will be shared with the HTCU Commander and scored for accuracy.
Future research recommendations and study limitations will be discussed.
Investigatory methods and approaches: Developments in biometrics
11:45-13:15
Developing automated methods to detect and match face and voice biometrics in child sexual abuse material
Lead Author
Bryce Westlake, San Jose State University
Co-authors: Russell Brewer, Thomas Swearingen, & Arun Ross
Cloud-based technologies have the potential to transform investigatory practices – particularly those involving a large cache of digital evidence, such as for child sexual abuse. This paper assesses both the risks and benefits of using cloud-based technologies in this context. Various key risks are canvassed, including the potential for data breaches and compliance with relevant legal frameworks. The benefits of such technologies are also elucidated, including improvements to scalability, performance, resource pooling, and collaboration, alongside reduced operating costs. We argue that these benefits outweigh the potential risks – provided that appropriate and robust security approaches are designed and implemented. To this end, we propose a model infrastructure to ensure secure processing and storage of CSAM in the cloud, and undertake a series of penetration tests to demonstrate its effectiveness.
Understanding child sexual abuse offending and victimisation patterns through the extraction and analysis of biometrics
Lead Author
Russell Brewer, University of Adelaide
Co-authors: Bryce Westlake, Thomas Swearingen, & Arun Ross
This paper demonstrates how offender/victim patterns in child sexual abuse material (CSAM) can be deduced using biometric matching. Patterns are represented as a social network that reveals identity connections across media files seized by police. Using an automated software system previously developed by the research team (the Biometric Analyser and Network Extractor), we extract, match, and plot multiple biometric attributes (face and voice) from a database of CSAM videos compiled by Australian law enforcement. Additional network analysis also illustrates the extent to which people depicted within media files (both victims and offenders) are interconnected across evidence holdings. We discuss the implications of this work, both in terms of the insight it offers into how this offending type is socially organized, and its practical implications for law enforcement. Future directions for this research are also discussed.
A multi-factor knuckle and nailbed verification tool for forensic imagery analysis
Lead Author
Richard Guest, University of Kent
Co-authors: Marco Santopietro, Kathryn Seigfried-Spellar, & Stephen Elliott
When engaging in child sexual grooming, offenders often send pornographic selfies to minors. They hide their faces, but their sexts often include hand, knuckle, and nail bed imagery. We present a novel biometric hand verification tool designed to identify offenders and abusers from images or videos based on biometric/forensic features extracted from hand regions. The tool harnesses the unique characteristics of an individual's hand, focusing on the regions of interest of the knuckle print and the nail bed area. By employing advanced image processing and machine learning techniques, the system can match and authenticate hand component imagery against a constrained custody suite reference of a known subject. The proposed biometric hand verification tool works on both static images and videos, in the latter case selecting the best frame (in terms of resolution and orientation of the hand). The tool is embedded with selectable authentication models trained on a variety of available datasets (both individually and in combination). To explore the performance and reliability of the biometric verification models, we considered several parameters, including hand orientation, distance from the camera, single versus multiple fingers, model architecture, and loss functions. Results showed the best performance for pictures sampled from the same database under the same image capture conditions, which, combined with nail and knuckle score fusion, reached high levels of reliability with error rates lower than 1%. We highlight the strengths of the system and its current limitations. The authors conclude the biometric hand verification tool offers a robust solution that will operationally impact law enforcement by allowing agencies to investigate and identify offenders and abusers online more effectively.
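The abstract does not detail how the nail and knuckle scores are combined. As a generic, hypothetical sketch of score-level fusion (not the authors' implementation: the weights, score ranges, and threshold below are all illustrative assumptions), each matcher's raw score is typically min-max normalized and the results combined with a weighted sum before thresholding:

```python
def min_max_normalize(score, lo, hi):
    """Map a raw matcher score into [0, 1] given that matcher's observed range."""
    return (score - lo) / (hi - lo)

def fuse(knuckle_score, nail_score, w_knuckle=0.6, w_nail=0.4):
    """Weighted-sum score-level fusion of two matchers (weights are hypothetical)."""
    return w_knuckle * knuckle_score + w_nail * nail_score

# Hypothetical raw scores and ranges for the two matchers
k = min_max_normalize(72.0, lo=0.0, hi=100.0)   # knuckle matcher score
n = min_max_normalize(0.81, lo=0.0, hi=1.0)     # nail bed matcher score
fused = fuse(k, n)                               # 0.6*0.72 + 0.4*0.81 = 0.756
accepted = fused >= 0.5                          # hypothetical decision threshold
```

Weighted-sum fusion is a common baseline because it degrades gracefully when one matcher receives a low-quality region (e.g., a blurred nail bed) while the other remains informative.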
Advances in artificial intelligence and machine learning
14:00-15:30
Toward a multimodal machine learning framework to detect unknown child sexual abuse material
Lead Author
Cyndie Demeocq, University of East London
Co-authors: Julia Davidson & Ameer Al-nemrat
Artificial Intelligence (AI) detection systems currently face challenges in effectively addressing child sexual abuse material (CSAM). While methods such as hashing or file name detection are commonly used, they have notable weaknesses, especially when the material is unknown and dissimilar to CSAM already known to police forces or other related agencies. Another area of weakness is understanding the context in which the abuse occurs within an image, or how the image is shared. This ongoing PhD research aims to tackle both the context of online sexual grooming and that of online child sexual exploitation. Recognised for its computational capability to enhance context understanding, the AI method of multimodal machine learning will be explored. By leveraging an adapted framework of features specific to CSAM, this method holds promise for addressing the nuanced aspects present in abusive material. Consequently, it will be a focal point of investigation for improving the detection of new and unknown instances of CSAM. The potential outcomes of this research can be applied in the development of social network implementations and can aid law enforcement in CSAM reporting processes.
Source camera identification from images and videos using deep learning methods
Lead Author
Guru Swaroop Bennabhaktula, University of Groningen
Co-author: George Azzopardi
The proliferation of illicit content involving minors circulated on the internet has become a grave concern for society. Effectively identifying the origin of these images holds paramount importance for law enforcement agencies (LEAs). Each image contains sensor noise, which can serve as a valuable piece of information about the specific camera device used to capture such illicit content. This crucial information greatly aids LEAs in gathering vital clues that can ultimately lead to the identification and apprehension of the suspected offender. As part of the EU-funded project 4NSEEK, we developed three algorithms for camera identification using deep-learning-based methods. First, we tackle the challenge of identifying images captured by the same camera device. We propose a two-part network that quantifies the likelihood of image pairs sharing forensic traces: a convolutional neural network first extracts the camera signature, and a second neural network then computes similarity scores between signature pairs. Analysis on the Dresden dataset, consisting of 31 camera devices, yielded an 85% accuracy rate. Second, we address the challenge of scene content interfering with the extraction of reliable forensic traces. By focusing on small image regions unaffected by scene content, we extract homogeneous patches. To identify the camera model in a given image, we propose a hierarchical deep learning system to extract and classify these patches. Experimental results on the Dresden dataset demonstrate the effectiveness of our approach, achieving the best reported result of 99.01% accuracy with 63 devices. Lastly, we tackle video camera identification, which requires different techniques than image-based identification. Despite the unique challenges posed by platforms like YouTube and WhatsApp, we show that camera identification is still feasible.
Our experiments on the VISION and QUFVD datasets achieve the best reported accuracies of 72.75% and 71.75%, respectively.
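The learned pipeline above is not reproduced here; as a rough, hypothetical illustration of the underlying principle (a fixed per-camera sensor-noise pattern survives across different scenes and can be matched by correlating noise residuals), a minimal classical sketch, not the authors' method, might look like:

```python
import numpy as np

def noise_residual(img, k=3):
    """Image minus a k-by-k box-filtered version: a crude stand-in for the
    learned signature extractor (borders cropped to avoid edge bias)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    denoised = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            denoised[i, j] = padded[i:i + k, j:j + k].mean()
    return (img - denoised)[pad:-pad, pad:-pad]

def similarity(a, b):
    """Normalized cross-correlation between two noise residuals, in [-1, 1]."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

rng = np.random.default_rng(0)
# Two cameras, each modeled as a fixed additive noise pattern (PRNU-like)
cam1, cam2 = rng.normal(0, 2, (32, 32)), rng.normal(0, 2, (32, 32))
# Two smooth synthetic scenes (linear gradients) plus small per-shot noise
scene_a = np.add.outer(np.linspace(0, 255, 32), np.linspace(0, 128, 32))
scene_b = scene_a.T
img_a = scene_a + cam1 + rng.normal(0, 1, (32, 32))  # camera 1
img_b = scene_b + cam1 + rng.normal(0, 1, (32, 32))  # camera 1, other scene
img_c = scene_b + cam2 + rng.normal(0, 1, (32, 32))  # camera 2

same_cam = similarity(noise_residual(img_a), noise_residual(img_b))
diff_cam = similarity(noise_residual(img_a), noise_residual(img_c))
# same_cam is substantially higher than diff_cam
```

The deep-learning approach in the abstract replaces both the hand-crafted residual and the correlation score with learned components, which is what makes it robust to the compression and rescaling introduced by platforms like YouTube and WhatsApp.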
Introducing generative AI: The role of self-regulation in online communities towards virtual child sexual abuse material
Lead Author
Amy Roberts, Murdoch University
In recent years there have been substantive developments in AI and ML technology and, in turn, an increased dependency of cybercrime on such technologies. There is, however, little research on how child sexual exploitation offenders have interacted with such technology. Virtual CSEM shared in online communities is increasingly encountered on publicly accessible platforms, and individuals online are more likely to encounter psychologically harmful materials intended to be realistic representations of CSEM (Christensen et al., 2021). As a result, this study examines how virtual CSEM offenders perceive the material they interact with and produce, and the self-regulatory behaviours of these deviant individuals in online communities. Using data obtained from text-based communications in these online communities, this study uses content-based analysis to assess the self-regulation and disinhibition effects of deviant online users. Content-based analysis is a practical method of developing insight into coded behaviour between users and identifying potential pathways to the escalation of offending. This research aims to provide insight into how deviant self-regulation methods develop alongside AI and ML technology in online contexts.
Examining offender-based interventions
16:00-17:30
Warnings for Internet users attempting to access 'barely legal' pornography: Examining the effects of imagery on therapeutic and deterrent messages
Lead Author
Jeremy Prichard, University of Tasmania
Co-authors: Richard Wortley, Paul Watters, Joel Scanlan, Caroline Spiranovic
This paper presents the third in a series of studies. Our experiment used a honeypot website that purported to contain barely legal pornography, which we treated as a proxy for CSAM. We examined whether warnings would dissuade males (18-30 years) from visiting the website. Participants (n = 474) who attempted to access the site were randomly allocated to one of four conditions. The control group went straight to the landing page (control; n = 100). The experimental groups encountered different warning messages: deterrence-themed with an image (D3; n = 117); therapeutic-themed (T1; n = 120); and therapeutic-themed with an image (T3; n = 137). We measured click-through to the site. Three quarters of the control group attempted to enter the pornography site, compared with 35% to 47% of the experimental groups. All messages were effective: D3 (odds ratio [OR] = 5.02), T1 (OR = 4.06) and T3 (OR = 3.05). Images did not enhance warning effectiveness. We argue that therapeutic and deterrent warnings are useful for CSAM prevention.
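As a quick illustration of how odds ratios like those reported above are computed from a 2x2 table (the warning-group counts below are hypothetical, chosen only to sit near the reported figures; they are not the study's data):

```python
def odds_ratio(warned_yes, warned_no, control_yes, control_no):
    """OR comparing the odds of entering the site in the control group
    against a warning (experimental) group; OR > 1 favors the warning."""
    return (control_yes / control_no) / (warned_yes / warned_no)

# Control: 75 of 100 entered (the reported 75%). Hypothetical warning
# group: 44 of 117 entered.
or_warning = odds_ratio(warned_yes=44, warned_no=73,
                        control_yes=75, control_no=25)
# or_warning is approximately 4.98
```

An OR of about 5 means the odds of clicking through were roughly five times higher without the warning than with it, which is how the D3 result should be read.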
Help Wanted: Preliminary findings from a randomised controlled trial
Lead Author
Amanda Ruzicka, Moore Center for the Prevention of Child Sexual Abuse
Co-author: Ryan Shields (presenting in place of Elizabeth Letourneau)
Men who want help for their sexual feelings towards children are prime candidates for behavioural and therapeutic treatment. However, these men may be unaware of available services, or highly reluctant to seek support unassisted. Identifying the unique demographic, social, and online characteristics of these men may be key to their identification and referral to relevant treatment, or to direct messaging advertising accessible support services. As such, the current paper is an exploratory study examining the factors that differentiate men who want help for their sexual feelings towards children from those who do not. Data comprise representative, stratified random samples of men from Australia (n = 1,945), the U.S. (n = 1,473), and the U.K. (n = 1,506). Preliminary analyses indicate that, of those sexually attracted towards children, 29.6% of Australian men, 33.0% of U.K. men, and 44.8% of U.S. men wanted help for their feelings. This study will present bivariate statistics describing the demographic characteristics, online habits, physical and mental health, social supports, adverse childhood experiences, attitudes towards child sexual abuse, and types of child sexual offending that differentiate men who do and do not want help for their sexual feelings towards children. Analyses will be conducted separately for each country, with effect sizes formally compared to identify statistically significant differences.
Responding to harmful sexual behaviour: An examination of evidence-based reduction efforts
Lead Author
Carol Ronken, Braveheart Foundation
Co-author: Deirdre Thompson
As Director of Research at Bravehearts, Carol Ronken is passionate about ensuring the organisation is active in research, policy, and legislative development that aims to respond to and reduce child sexual abuse and exploitation. She is a member of several working groups, including the Australian Centre to Counter Child Exploitation Research Working Group and the Australian Child Rights Taskforce. Carol is also an Industry Fellow in the Centre of Justice, Queensland University of Technology.