Abstract

Viral diseases have had a significant impact on millions of people worldwide. Time series forecasting methods allow for the estimation of cases, facilitating the control of disease spread and the allocation of necessary resources in medical facilities. In several cases, limited data availability poses challenges for obtaining reasonable results. Furthermore, there are often several variables, such as hospitalizations and the number of tests, that significantly affect the target variable. This requires us to identify and process intervariable relations effectively for improved estimation. Toward that end, the dissertation first explores the use of a pretrained network for estimating influenza-like illness cases. We propose a novel network architecture, LLM4cast, that utilizes a bidirectional encoder for rich embedding extraction and a pretrained TinyLlama for fine-tuning. The framework is trained in two stages: the first stage involves pretraining on data from diverse domains comprising 2.56M timesteps, and the second stage involves training on domain-specific datasets. The evaluation results demonstrate significant improvement compared to state-of-the-art dataset-specific models.

The second section of this dissertation expands this work to processing interchannel relations. We first introduce an effective mechanism to process channel dependencies for a dataset-specific model; this model is then extended for pretraining on multiple datasets. Specifically, we introduce a novel framework, DG4cast (Dependency-Guided forecast), that integrates a dependency-guided transformer with horizon-sensitive modeling. We further propose a subwindowing strategy that captures both short-term and long-term temporal patterns and enables horizon-sensitive modeling. The framework estimates the dependency measures between input channels; the tokenized channels and their dependencies are then passed to the dependency-guided transformer, which utilizes the dependency measures to guide the learning. The transformer outputs are processed through a linear layer to obtain the predicted values. These steps are carried out for each subwindow, with the network utilizing a dedicated set of layers per subwindow. Dynamic loss weighting smoothly balances the optimization of each subwindow, and the per-subwindow results are concatenated at the end to form the prediction for the horizon window. Extensive experiments are run with different dependency measures to show the effectiveness of the proposed method.
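The following is a minimal sketch of the per-subwindow pipeline described above, written in PyTorch with a plain multi-head attention layer standing in for the dependency-guided transformer. All names (SubwindowBlock, DG4castSketch, dynamic_weighted_loss) and hyperparameters are hypothetical illustrations under stated assumptions, not the dissertation's published implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SubwindowBlock(nn.Module):
    """Dedicated layers for one subwindow: channel tokenization,
    dependency-guided attention, and a linear forecasting head."""

    def __init__(self, subwindow_len, horizon_len, d_model=64, n_heads=4):
        super().__init__()
        self.tokenize = nn.Linear(subwindow_len, d_model)   # one token per input channel
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.head = nn.Linear(d_model, horizon_len)         # per-channel predictions

    def forward(self, x, dep):
        # x:   (batch, channels, subwindow_len) slice of the look-back window
        # dep: (channels, channels) float dependency measure, e.g. a correlation matrix
        tokens = self.tokenize(x)                            # (batch, channels, d_model)
        # One simple way to let the dependencies guide the learning: inject them as
        # an additive attention bias so strongly dependent channels attend to each other more.
        bias = dep.unsqueeze(0).repeat(x.size(0) * self.attn.num_heads, 1, 1)
        out, _ = self.attn(tokens, tokens, tokens, attn_mask=bias)
        return self.head(out)                                # (batch, channels, horizon_len)


class DG4castSketch(nn.Module):
    """One dedicated SubwindowBlock per subwindow."""

    def __init__(self, subwindow_lens, horizon_lens, **kw):
        super().__init__()
        self.blocks = nn.ModuleList(
            SubwindowBlock(sl, hl, **kw) for sl, hl in zip(subwindow_lens, horizon_lens)
        )

    def forward(self, subwindows, dep):
        # subwindows: list of (batch, channels, subwindow_len) tensors, one per subwindow
        return [blk(x, dep) for blk, x in zip(self.blocks, subwindows)]


def dynamic_weighted_loss(preds, targets, eps=1e-8):
    """One possible dynamic loss weighting: scale each subwindow's loss by its
    share of the total loss so harder subwindows receive more emphasis."""
    losses = torch.stack([F.mse_loss(p, t) for p, t in zip(preds, targets)])
    weights = losses.detach() / (losses.detach().sum() + eps)
    return (weights * losses).sum()

At inference time the per-subwindow forecasts would be concatenated along the horizon dimension, e.g. torch.cat(preds, dim=-1), to cover the full horizon window.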
