APA Style
Saksham Dewan, Hany Elgala. (2024). Towards Low Complexity VLC Systems: A Multi-Task Learning Approach. Comm&Optics Connect, 1 (Article ID: 0002). https://doi.org/10.69709/COConnect.2024.097213

MLA Style
Saksham Dewan, Hany Elgala. "Towards Low Complexity VLC Systems: A Multi-Task Learning Approach." Comm&Optics Connect, vol. 1, 2024, Article ID: 0002, https://doi.org/10.69709/COConnect.2024.097213.

Chicago Style
Saksham Dewan, Hany Elgala. 2024. "Towards Low Complexity VLC Systems: A Multi-Task Learning Approach." Comm&Optics Connect 1 (2024): 0002. https://doi.org/10.69709/COConnect.2024.097213.

Volume 1, Article ID: 2024.0002
Saksham Dewan
sdewan@albany.edu
Hany Elgala
helgala@albany.edu
1 Department of ECE, University at Albany — SUNY, Albany, NY 12222, USA
* Author to whom correspondence should be addressed
Received: 20 Aug 2024 Accepted: 26 Sep 2024 Published: 28 Sep 2024
In the rapidly evolving landscape of wireless communication, visible light communication (VLC) stands out for its potential to redefine high-speed data exchange. Recently, VLC has utilized waveforms that combine multiple bitstreams in a unified physical layer, enabling high-speed data exchange, precise localization, and robust control simultaneously. In particular, the demodulation tasks of beacon position modulation (BPM) and beacon phase shift keying (BePSK) are central to the decoding of such waveforms and pose significant computational challenges. This paper explores the application of multi-task learning (MTL) to these demodulation processes, aiming to reduce the complexity associated with these tasks. By systematically developing and optimizing MTL architectures, this study introduces a sequence of models, culminating in a cross-stitch (CS) model that significantly improves performance and reduces computational complexity relative to traditional single-task learning (STL) approaches for the demodulation of VLC waveforms. The CS model demonstrates substantial reductions in model complexity, quantified as a 26% decrease in trainable parameters and a 10% reduction in FLOPs compared to STL models, showcasing the potential of VLC waveforms in resource-limited and cost-effective applications such as Internet-of-Things (IoT) devices. These advancements highlight the potential of MTL to improve the scalability and operational feasibility of VLC systems.
As wireless technologies continue to evolve, visible light communication (VLC) has emerged as a transformative solution, offering several advantages over conventional radio frequency (RF) systems. By utilizing the visible light spectrum, VLC offers advantages in speed, security, and bandwidth [1,2,3]. At the core of VLC systems lie novel waveforms that integrate multiple bitstreams into a single, unified physical layer. Due to this integration, these waveforms simultaneously enable high-speed data transmission, accurate localization, and robust control, essential for the deployment of versatile and efficient communication systems. For example, recent works [4,5,6,7] discuss the integration of multiple bitstreams within a single physical layer solution. The versatility of these waveforms stems from their ability to seamlessly blend various modulation techniques, such as beacon position modulation (BPM) and beacon phase shift keying (BePSK). Accurate demodulation of these components is key to decoding multiple bitstreams effectively. Amid these advancements, multi-task learning (MTL) has come to play a pivotal role, leveraging shared information across multiple tasks to enhance learning efficiency and performance [8]. In this context, for complex tasks such as the demodulation of novel waveforms, MTL offers substantial benefits over traditional single-task learning (STL) approaches, which handle tasks such as BPM and BePSK demodulation independently. STL often leads to increased system complexity and resource demands due to the need for separate models for each task, resulting in redundancy and inefficiency [9]. In contrast, MTL facilitates simultaneous training [10,11,12] on related tasks, enabling the optimization of multiple communication system components concurrently. This approach not only reduces complexity, but also enhances resource utilization by leveraging shared representations and synergies between tasks.
Such efficiency is critical in VLC networks, where real-time processing demands high responsiveness. Ultimately, MTL streamlines operations, lowers energy consumption, and boosts overall system performance, making it a superior choice for managing the intricate demodulation processes required in advanced VLC applications. This paper focuses on the design and optimization of MTL models that specifically cater to the demodulation of VLC waveforms. It explores various model architectures, evolving from initial designs to advanced configurations, aiming to significantly reduce the complexity inherent in these MTL networks. The development of each model iteration is guided by ablation studies, as discussed in [13,14,15], that systematically evaluate the impact of various architectural features on model performance and complexity. The contributions of this study are as follows:

Introduction of advanced MTL models: This study presents a novel MTL architecture that makes use of a specialized architectural block within the task-specific layers, designed to efficiently handle the complexities associated with the demodulation of VLC waveforms.

Systematic reduction of complexity: Through iterative enhancements and optimizations, this study demonstrates how each model variant contributes to a systematic reduction in computational complexity while maintaining high accuracy and efficiency.

The rest of the paper is structured as follows: Section II discusses related work, emphasizing significant developments in MTL and its application to VLC systems. Section III outlines the VLC waveform design, detailing the integration of various modulation techniques for enhanced communication capabilities. Section IV explains the methodology, describing the evolution of the MTL models and detailing the implementation of ablation studies used to refine these models. Section V presents the results, providing a comparative analysis of model performance and complexity.
Finally, Section VI concludes the paper with a summary of findings and discusses potential future research directions. By addressing the complexities associated with MTL in VLC networks, this paper aims to advance the field by setting new benchmarks for efficiency and scalability in communication systems, thereby paving the way for more robust and integrated VLC deployments.
2.1. Background and Related Work

MTL has been a prominent paradigm since its introduction in 1997 [8]. Significant advances have been made in MTL over the past decade, and subsequent works have aimed to formulate and establish relationships between tasks in an MTL network [9,10,11,16]. A novel deep convolutional neural network (CNN) called MoDANet (Multi-Task Deep Network for Joint Automatic Modulation Classification and Direction of Arrival Estimation) is introduced in [17] for simultaneously performing two tasks: automatic modulation classification (AMC) and direction of arrival (DOA) estimation of radio signals. The network architecture is designed with multiple residual modules to tackle the vanishing gradient problem, and it employs an MTL approach with a Y-shaped connection to learn shared representations between the AMC and DOA tasks. The authors claim that MoDANet is the first deep learning-based MTL model to handle the two unrelated tasks of AMC and DOA estimation simultaneously. The authors in [18] propose a novel MTL approach for joint automatic modulation classification and wireless signal classification using the RadCom datasets. That work introduces the first MTL framework in the wireless communications domain to simultaneously perform modulation classification and signal classification tasks on heterogeneous radar and communication waveforms. The authors propose an MTL architecture that uses a hard parameter-sharing strategy, where the hidden layers are shared between tasks while preserving task-specific layers. This allows learning a shared representation that captures all tasks, improving generalization and learning efficiency. The authors conducted an extensive study of various hyperparameters, such as task weights, network density, and layer configurations, to arrive at a lightweight and efficient MTL model. The research in [19] presents a novel approach to recognizing modulation schemes in cognitive radios using deep learning.
Traditional methods for modulation recognition, while powerful, can struggle with the dynamic conditions of real-world RF environments. That paper addresses these challenges by introducing an MTL approach that utilizes a deep convolutional neural network to extract the features necessary for classifying different modulation schemes. The proposed MTL approach aims to improve classification accuracy by training separate tasks for modulation classes that are commonly confused, thereby reducing overall confusion and enhancing accuracy. Studies such as [20,21,22,23] highlight the importance of leveraging the relationships between tasks to improve learning efficiency and performance across various applications. This approach aims to balance task independence against learning common representations. Regularizers can be used to establish task relationships in MTL by enforcing shared structures or constraints across tasks, as discussed in [24,25,26]. For example, by penalizing large differences in model parameters for related tasks, a regularizer can encourage the model to learn common features or representations. This not only helps in sharing knowledge among tasks but also prevents overfitting by promoting simpler, more generalizable models. This approach is effective in scenarios where tasks are related but not identical, allowing the model to leverage the underlying commonalities for improved performance across tasks. The authors in [14] also present an effective approach to recognizing modulation schemes in cognitive radios, likewise using an MTL approach built on a deep CNN to extract the features necessary for classifying different modulation schemes under dynamic real-world RF conditions. A summary of all related works in this area is presented in Table 1.
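The regularization idea discussed above, penalizing large differences between related tasks' parameters, can be sketched with a toy example. The loss form, penalty weight, and parameter vectors below are illustrative assumptions, not values from the cited works:

```python
import numpy as np

def mtl_regularized_loss(task_losses, task_params, lam=0.1):
    """Total MTL loss: sum of per-task losses plus an L2 penalty on
    pairwise differences between task-specific parameter vectors.
    The penalty nudges related tasks toward similar representations."""
    total = sum(task_losses)
    n = len(task_params)
    for i in range(n):
        for j in range(i + 1, n):
            total += lam * np.sum((task_params[i] - task_params[j]) ** 2)
    return total

# Two tasks with similar (toy) parameters incur only a small penalty:
w_bpm = np.array([0.5, -0.2, 0.1])
w_bepsk = np.array([0.4, -0.1, 0.2])
loss = mtl_regularized_loss([0.05, 0.02], [w_bpm, w_bepsk], lam=0.1)
# loss = 0.07 (task losses) + 0.1 * 0.03 (penalty) = 0.073
```

With a single task there are no pairwise terms and the loss reduces to the plain task loss, so the regularizer only activates when tasks coexist.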
2.2. VLC Waveform Design

The advancements and optimizations in VLC can be significantly enhanced by employing a unified physical layer waveform, such as the mixed-carrier communication (MCC) [4] and unified physical layer (UniPHY) [5] waveforms. Such a waveform is unique in its ability to integrate various waveforms, meeting the multifaceted requirements of a VLC-enabled indoor flying network, including simultaneous localization, dimming, control, and data transfer. This integration is pivotal for constructing waveforms that are central to robust and versatile communication capabilities within complex environments. The VLC waveform architecture discussed in this paper is based on the UniPHY waveform [5] and uniquely integrates time-domain optical orthogonal frequency division multiplexing (optical-OFDM) with modulation techniques such as BPM and BePSK, ensuring that these components do not interfere with each other [27,28]. This integration also provides comprehensive dimming support through pulse width modulation (PWM) [29]. The VLC waveform includes a series of pulses defined by a fixed amplitude ratio of the peak-to-peak voltage of the beacon waveform, with varying duty cycles proportional to the sampling amplitudes of the analog sinusoidal signal. A representation of a frame of the VLC waveform is given in Figure 1. The structure of the VLC waveform is organized into frames, each comprising a predetermined number of slots that accommodate time-series OFDM symbols. These slots are governed by L1 bits, resulting in 2^L1 possible beacon positions. Two modulation tasks are central to this waveform:

Beacon Position Modulation: This task involves modulating the position of the beacon signal within the transmission frame. Similar to pulse position modulation (PPM), by varying the position of the beacon, information is encoded spatially within the frame, allowing for enhanced data transmission capabilities and precise localization functionalities.
Beacon Phase Shift Keying: In this task, the information is encoded by varying the phase of the beacon signal. This modulation technique allows for the transmission of data through phase shifts, providing an additional layer of robustness and complexity to the data communication process. The beacon's duty cycle is variable, capable of conveying information through BePSK modulation determined by L2 bits. The sinusoidal wave's phase directly influences the PWM pulse duty cycle within the beacon slot, creating 2^L2 distinguishable phase states. The flexible parameters of the VLC waveform are summarized in Table 2. The performance of VLC systems depends on the efficient encoding and decoding of the VLC waveform, with parameters such as BPM and BePSK playing pivotal roles in defining the system's overall functionality and performance. The complexities inherent in managing these parameters underscore the necessity for employing sophisticated MTL models designed to optimize demodulation tasks, thereby reducing complexity and enhancing system performance.

2.3. Methodology

The methodology of this research involves applying MTL to the demodulation tasks of BPM and BePSK. The system model is based on the end-to-end learning framework introduced by the authors in [4]. An autoencoder is employed that takes input bits at the encoder and modulates them using the PWM technique to provide dimming support. The modulated signal then passes through an optical channel and is received at the decoder, where it is demodulated using classifiers for BPM and BePSK. This work employs an MTL framework in place of the two separately trained STL models discussed in [4]. A concise depiction of the system model is given in Figure 2. A key aspect of this study's methodology is the implementation of ablation studies, which systematically test various model configurations to pinpoint the most effective architectural elements that enhance performance and manage complexity.
These studies help encompass and track the performance variations caused by the number of shared and task-specific layers, adjustments in the placement and configuration of specialized layers, and other architectural modifications. The first implementation is the residual network (RN) model, inspired by the MoDANet architecture detailed in [17]. Like the MoDANet model, which utilizes multiple residual modules to counter the vanishing gradient problem and employs a Y-shaped connection for shared learning, the RN model integrates residual blocks to bolster deep learning capabilities. These blocks utilize skip connections, enhancing the gradient flow through extended networks and thereby augmenting the learning process for deep models. Following the RN model, the subsequent implementation is the hard parameter sharing (HPS) model, derived from principles detailed in [18]. This model utilizes a hard parameter-sharing strategy, sharing hidden layers across tasks while maintaining task-specific layers. This strategy promotes a shared representation that encompasses all tasks, thus enhancing generalization and learning efficiency. It seeks to achieve an ideal balance between shared learning and task-specific adaptability, potentially reducing the complexity associated with managing multiple learning tasks simultaneously. The final architectural iteration is the cross-stitch (CS) model, shown in Figure 3. This model incorporates the cross-stitch unit introduced in [30]. This unit facilitates an optimal blend of shared and task-specific representations by dynamically modulating the mixing of features from different tasks through a learnable linear combination parameterized by a matrix. The cross-stitch unit allows the model to effectively generalize across tasks, boosting performance by exploiting inter-task commonalities and distinctions.
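The cross-stitch operation described above admits a compact sketch: a 2x2 matrix linearly recombines the activations of the two task branches. The mixing values below are illustrative; in the actual model they are learnable parameters trained jointly with the network:

```python
import numpy as np

def cross_stitch(x_a, x_b, alpha):
    """Cross-stitch unit (Misra et al. [30]): linearly recombine the
    activations of two task branches.
    x_a, x_b : same-shape activations, one per task branch.
    alpha    : 2x2 mixing matrix; diagonal entries keep a branch
               task-specific, off-diagonal entries share features."""
    x_a_new = alpha[0, 0] * x_a + alpha[0, 1] * x_b
    x_b_new = alpha[1, 0] * x_a + alpha[1, 1] * x_b
    return x_a_new, x_b_new

# Mostly task-specific with mild sharing (illustrative values):
alpha = np.array([[0.9, 0.1],
                  [0.1, 0.9]])
a = np.array([1.0, 2.0])   # toy BPM-branch activations
b = np.array([3.0, 4.0])   # toy BePSK-branch activations
a_new, b_new = cross_stitch(a, b, alpha)
# a_new = 0.9*a + 0.1*b = [1.2, 2.2]
```

Setting alpha to the identity recovers two independent STL branches, while a uniform alpha collapses both branches to the same shared representation; training learns where on this spectrum each layer should sit.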
The ablation studies in this research are designed to reduce the models’ complexity while maintaining or enhancing their performance. Systematically modifying and evaluating each model variant helps to uncover the configurations that best balance performance with computational efficiency. This process is crucial for developing models that are not only effective in their task performance but also suitable for deployment on resource-constrained platforms, enhancing their practical applicability in real-world VLC applications. A detailed examination of the evolution of these models sets the stage for the subsequent presentation of results, which will discuss the effectiveness of these models in reducing complexity and enhancing performance comprehensively.
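As a minimal illustration of how an ablation study can track complexity across architecture variants, the helper below counts the trainable parameters of a fully connected stack; the layer sizes are hypothetical and not those of the paper's models:

```python
def dense_param_count(layer_sizes):
    """Trainable parameters of a fully connected stack: each layer
    contributes fan_in * fan_out weights plus fan_out biases.
    Used only to illustrate comparing variants by complexity."""
    return sum(i * o + o for i, o in zip(layer_sizes, layer_sizes[1:]))

# Compare two hypothetical variants of a task-specific head:
wide = dense_param_count([512, 256, 128, 4])   # larger head: 164,740 params
slim = dense_param_count([512, 128, 64, 4])    # pruned head:  74,180 params
```

In practice the paper's complexity figures come from the TensorFlow model summary, but the same idea applies: each ablation variant is scored on both accuracy and a complexity count like this one.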
Table 1. Comparison of related works on the application of MTL.

| Model | Tasks | Architecture |
|---|---|---|
| MoDANet | Automatic modulation classification, direction of arrival estimation | Deep CNN |
| RadCom | Modulation classification, signal classification | Deep CNN |
| Cross-stitch model | Beacon position modulation classification, beacon phase shift keying classification (demodulation tasks) | Deep CNN |
Table 2. Flexible parameters of the VLC waveform.

| Parameter | Value(s) |
|---|---|
| BPM orders (L1) | 2, 4, 8, 16 |
| BePSK orders (L2) | 2, 4 |
| Dimming duty cycle (D) | 50% |
| QAM orders (M) | 4, 8, 16, 32 |
| Number of OFDM symbols per PWM (C2) | 6 |
| Number of OFDM subcarriers per symbol (N) | 64 |
| AWGN SNR value | 10 dB |
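The BPM and BePSK bit mappings described in Section 2.2 can be sketched as follows; the one-hot frame layout and uniformly spaced phases are simplifying assumptions for illustration, not the paper's exact signal model:

```python
import numpy as np

def bpm_encode(bits):
    """Map L1 bits to a beacon position among 2**L1 slots, analogous
    to pulse position modulation (PPM): the bit pattern selects the
    slot index, represented here as a one-hot frame."""
    position = int("".join(map(str, bits)), 2)
    frame = np.zeros(2 ** len(bits))
    frame[position] = 1.0
    return frame

def bepsk_phase(bits):
    """Map L2 bits to one of 2**L2 uniformly spaced beacon phases,
    which in the waveform would set the PWM pulse duty cycle."""
    index = int("".join(map(str, bits)), 2)
    return 2 * np.pi * index / 2 ** len(bits)

frame = bpm_encode([1, 0])   # 4-BPM: bits '10' select slot 2
phase = bepsk_phase([1])     # 2-BePSK: bit '1' maps to phase pi
```

The demodulators studied in this paper learn the inverse of these mappings from received frames, which is what makes BPM and BePSK classification natural targets for a shared MTL network.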
For training each model, the input consists of VLC waveform frames configured for the two tasks of BPM and BePSK classification. Specifically, the input layer size matches the frame dimensions of a 4-BPM, 2-BePSK, 16-QAM waveform, featuring 6 OFDM symbols per PWM pulse, 2 PWM pulses per slot, and a total of 4 slots in a frame. The dataset includes a total of 10,000 frames, divided into an 80%-20% split for training and validation. Training is conducted on a system equipped with an Intel i5-12600KF processor at 4.2 GHz, 32 GB of memory, and an NVIDIA RTX 3070 GPU with 8 GB of VRAM. The models are trained over 20 epochs using a learning rate of 0.001, a batch size of 64, and the Adam optimizer. These training parameters are aligned with those used in [4] to ensure consistency and enable fair comparisons between the multi-task and single-task models for the two demodulation tasks of BPM and BePSK. The performance of the RN model, HPS model, and CS model is evaluated using metrics such as accuracy, loss, and convergence rate. The performance of MTL and STL in the context of joint communication and sensing is also compared. The performance comparison among the three MTL iterations and STL is summarized in Table 3. The comparison across the STL, RN, HPS, and CS models reveals significant variations in accuracy, loss, and convergence rates. The STL models, serving as benchmarks, demonstrate high accuracy, with the BPM and BePSK tasks achieving 98% and 99% respectively, and rapid convergence within 6 and 7 epochs. In contrast, the RN model underperforms and does not reach convergence due to overfitting, with notably lower accuracies of 78% for BPM and 81% for BePSK, and higher losses of 0.27 and 0.29 respectively, indicating unstable training.
Conversely, both the HPS and CS models exhibit superior performance, achieving near-perfect accuracy of 99% across tasks. These models not only match the STL models in accuracy but also demonstrate comparable loss values, with the CS model slightly outperforming the HPS model in the BePSK task by achieving a lower loss of 0.01 compared to 0.02. Additionally, both models converge efficiently because the convolutional layers in both task-specific branches are effective at extracting relevant features from the waveform data. Efficient feature extraction speeds up convergence and enables rapid learning. The CS model converges on the BePSK task in 10 epochs compared to 8 epochs for the HPS model, indicating a minor yet acceptable trade-off between model complexity and speed of learning. The performance graphs for the CS model for different orders of BPM and BePSK are showcased in Figure 4 and Figure 5 respectively. The performance of the CS model is also shown by the confusion matrices for 4-BPM and 2-BePSK in Figure 6. The diagonal elements are significantly larger in count than the off-diagonal elements, which validates the high accuracy attained during training. The complexity of the models, as summarized in Table 4, is quantified using two metrics: the number of trainable parameters (provided by the TensorFlow model summary) and the number of floating-point operations (FLOPs). Lower values in both metrics indicate more efficient model architectures. The STL model utilizes a relatively moderate amount of computational resources with 2.91 million parameters and 1.07G FLOPs, compared to the RN model, which shows a substantial increase in complexity, requiring 9.23 million parameters and 3.61G FLOPs.
This is indicative of its more resource-intensive nature, which may not translate efficiently into practical applications, especially in resource-constrained environments. The HPS model significantly reduces complexity relative to the RN model, down to 4.22 million parameters and 1.99G FLOPs, balancing performance with computational demands more effectively. The most notable improvement is observed in the CS model, which not only maintains performance comparable to the STL models but also drastically reduces resource requirements to only 2.15 million parameters and 0.96G FLOPs, making it the most efficient model in terms of both parameter count and computational overhead. This reduction in complexity is particularly beneficial for deployment in scenarios where computational resources are limited, such as mobile or embedded systems, without sacrificing the accuracy or functionality of the models.
Table 3. Model performance comparison.

| Model | Task | Accuracy | Loss | Convergence (Epoch) |
|---|---|---|---|---|
| STL | BPM | 98% | 0.05 | 6 |
| RN | BPM | 78% | 0.27 | - * |
| HPS | BPM | 99% | 0.02 | 7 |
| CS | BPM | 99% | 0.02 | 8 |

* The RN model did not achieve convergence due to overfitting issues.
Table 4. Model complexity comparison.

| Model | Trainable Parameters | FLOPs |
|---|---|---|
| STL | 2.91M | 1.07G |
| RN | 9.23M | 3.61G |
| HPS | 4.22M | 1.99G |
| CS | 2.15M | 0.96G |
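The CS model's headline reductions relative to STL can be checked directly from the Table 4 figures:

```python
# Figures from Table 4 (STL vs. CS model):
stl_params, cs_params = 2.91e6, 2.15e6   # trainable parameters
stl_flops, cs_flops = 1.07e9, 0.96e9     # floating-point operations

param_reduction = (stl_params - cs_params) / stl_params * 100
flop_reduction = (stl_flops - cs_flops) / stl_flops * 100
# param_reduction is about 26.1%, flop_reduction about 10.3%,
# matching the ~26% and ~10% reductions quoted in the text.
```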
This research demonstrates the effectiveness of MTL architectures in reducing the computational complexity required for demodulating VLC waveforms. Notably, the CS model emerges as the most efficient, achieving comparable or superior performance metrics to the STL models while significantly minimizing both trainable parameters and computational overhead. These findings underscore the potential of MTL to enhance the scalability and efficiency of VLC systems, making it an attractive approach for real-world applications where computational resources are limited. The results suggest that the integration of advanced MTL techniques can lead to more robust and adaptable communication systems. The comprehensive evaluation of the models indicates a clear superiority of the CS model over both the STL models and its predecessors within the MTL framework. In terms of performance, the CS model matches the high accuracy benchmarks set by the STL models. It achieves this with comparatively lower losses and excels in reducing computational complexity, manifested in a marked decrease in both trainable parameters and FLOPs. It offers a nearly 26% reduction in parameters and a 10% reduction in FLOPs compared to the STL model, and even more substantial gains over the RN and HPS models. This dual advantage of high performance coupled with reduced complexity underscores the CS model's enhanced suitability for practical applications, particularly in resource-constrained environments where efficiency and performance are paramount. Thus, the CS model showcases the potential of advanced MTL strategies to drive significant advancements in VLC and similar technologies.
The findings of this study suggest that MTL not only reduces the computational demands of complex VLC systems but also maintains high accuracy in critical demodulation tasks. This balance of efficiency and performance positions MTL as a transformative approach for future VLC applications, where computational resource limitations are significant constraints. The promising results invite further exploration into the application of MTL architectures across different domains of communication technologies. Future research could focus on adapting these MTL models for other forms of digital communication, such as RF, to explore the universal applicability of the architectural innovations.
VLC — Visible light communication
CNN — Convolutional neural network
IoT — Internet of Things
BPM — Beacon position modulation
BePSK — Beacon phase shift keying
MTL — Multi-task learning
STL — Single-task learning
RF — Radio frequency
UniPHY — Unified physical layer
MCC — Mixed-carrier communication
MoDANet — Multi-task Deep Network for Joint Automatic Modulation Classification and Direction of Arrival Estimation
AMC — Automatic modulation classification
DOA — Direction of arrival
PWM — Pulse width modulation
OFDM — Orthogonal frequency division multiplexing
RN Model — Residual network model
HPS Model — Hard parameter sharing model
CS Model — Cross-stitch model
FLOPs — Floating-point operations
S.D.: Conceptualized and designed the research study, conducted experiments, generated and analyzed results, and authored the draft of the manuscript. H.E.: Provided critical revisions and substantial intellectual input throughout the research process. Continuously reviewed the draft manuscript and offered vital feedback that significantly shaped the research direction and analysis. Assisted in refining the study’s methodology and contributed to the final manuscript preparation.
The dataset for this study will be available at the following link: https://github.com/saksh-d/vlc-mtl-dataset
Not applicable.
The authors declare no conflict of interest.
No external funding was received for this research.
[1] E. Niarchou, A. C. Boucouvalas, Z. Ghassemlooy, L. N. Alves, S. Zvanovec, "Visible Light Communications for 6G Wireless Networks," in Proceedings of the 2021 Third South American Colloquium on Visible Light Communications (SACVLC), 11–12 November 2021, Toledo, Brazil, pp. 1-6.
[2] W. Jiang, B. Han, M. A. Habibi, H. D. Schotten, "The Road Towards 6G: A Comprehensive Survey" IEEE Open J. Commun. Soc., vol. 2, pp. 334-366, 2021. [Crossref]
[3] N. Chi, Y. Zhou, Y. Wei, F. Hu, "Visible Light Communication in 6G: Advances, Challenges, and Prospects" IEEE Veh. Technol. Mag., vol. 15, pp. 93-102, 2020. [Crossref]
[4] R. Ahmad, H. Elgala, S. Almajali, H. Bany Salameh, M. Ayyash, "Unified Physical-Layer Learning Framework Toward VLC-Enabled 6G Indoor Flying Networks" IEEE Internet Things J., vol. 11, pp. 5545-5557, 2024. [Crossref]
[5] R. Ahmad, D. Anwar, H. A. Bany Salameh, H. Elgala, M. Ayyash, S. Almajali, et al., "Generalized Hybrid LiFi-WiFi Uni-PHY Learning Framework Towards Intelligent UAV-based Indoor Networks" Int. J. Intell. Netw., vol. 12, p. 100345, 2024. [Crossref]
[6] A. F. Hussein, H. Elgala, "Design and Spectral Analysis of Mixed-Carrier Communication for Sixth-Generation Networks" Proc. R. Soc. A Math. Phys. Eng. Sci., vol. 476, p. 20200165, 2020. [Crossref]
[7] A. F. Hussein, D. Saha, H. Elgala, "Mixed-Carrier Communication for Technology Division Multiplexing" Electronics, vol. 10, 2021. [Crossref]
[8] R. Caruana, "Multitask Learning" Mach. Learn., vol. 28, pp. 41-75, 1997. [Crossref]
[9] S. Ruder, "An Overview of Multi-Task Learning in Deep Neural Networks" arXiv, 2017.
[10] T. Marquet, E. Oswald, "A Comparison of Multi-Task Learning and Single-Task Learning Approaches" Advances in Cryptology – ASIACRYPT 2023, vol. 13900, pp. 170-190, 2023. [Crossref]
[11] A. Argyriou, T. Evgeniou, M. Pontil, "Multi-task Feature Learning," in Proceedings of the 19th International Conference on Neural Information Processing Systems (NIPS'06), Cambridge, MA, USA: MIT Press, 2006, pp. 41-48.
[12] T. Evgeniou, M. Pontil, "Regularized Multi-task Learning" Proc. Tenth ACM SIGKDD Int. Conf. Knowl. Discov. Data Min., 2004. [Crossref]
[13] D. P. Kingma, M. Welling, "Auto-Encoding Variational Bayes" arXiv, 2013.
[14] M. Long, Y. Cao, J. Wang, M. I. Jordan, "Learning Transferable Features with Deep Adaptation Networks" arXiv, 2015.
[15] T. Standley, A. R. Zamir, D. Chen, L. J. Guibas, J. Malik, S. Savarese, "Which Tasks Should Be Learned Together in Multi-task Learning?" arXiv, 2019.
[16] Y. Zhang, Q. Yang, "A Survey on Multi-Task Learning" IEEE Trans. Knowl. Data Eng., vol. 33, pp. 1-15, 2021. [Crossref]
[17] V.-S. Doan, T. Huynh-The, V.-P. Hoang, D.-T. Nguyen, "MoDANet: Multi-Task Deep Network for Joint Automatic Modulation Classification and Direction of Arrival Estimation" IEEE Commun. Lett., vol. 26, pp. 335-339, 2022. [Crossref]
[18] A. Jagannath, J. Jagannath, "Multi-Task Learning Approach for Modulation and Wireless Signal Classification for 5G and Beyond: Edge Deployment via Model Compression" Phys. Commun., vol. 54, p. 101793, 2022. [Crossref]
[19] O. S. Mossad, M. ElNainay, M. Torki, "Deep Convolutional Neural Network with Multi-Task Learning Scheme for Modulations Recognition," in Proceedings of the 2019 15th International Wireless Communications & Mobile Computing Conference (IWCMC), 24–28 June 2019, Tangier, Morocco.
[20] T. Sun, L. Shi, "Understanding Task Relationships in Multi-task Learning with Attention" arXiv, 2019.
[21] O. Sener, A. Keskin, Y. Shavit, "An Active Learning Survey: Efficient Annotation Strategies for Machine Learning" arXiv, 2023.
[22] S. Gupta, S. Rana, Q. D. Phung, S. Venkatesh, "What Shall I Share and with Whom? A Multi-Task Learning Formulation Using Multi-Faceted Task Relationships," Deakin University, Conference Contribution. Available online: https://hdl.handle.net/10536/DRO/DU:30082939 (accessed on 14 June 2024).
[23] Y. Zhang, D.-Y. Yeung, "Transfer metric learning by learning task relationships," in Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining (KDD ’10), Association for Computing Machinery, 25–28 July 2010, New York, NY, USA, pp. 1199-1208.
[24] Y. Zhang, D.-Y. Yeung, "A Convex Formulation for Learning Task Relationships in Multi-Task Learning" arXiv, 2012.
[25] Y. Zhang, D.-Y. Yeung, "A Regularization Approach to Learning Task Relationships in Multitask Learning" ACM Trans. Knowl. Discov. Data, vol. 8, p. 12, 2014. [Crossref]
[26] Z. Kang, K. Grauman, F. Sha, "Learning with Whom to Share in Multi-task Feature Learning," in Proceedings of the 28th International Conference on International Conference on Machine Learning, Omnipress, 28 June 2011, Madison, WI, USA, pp. 521-528.
[27] J. Armstrong, B.J.C. Schmidt, "Comparison of Asymmetrically Clipped Optical OFDM and DC-Biased Optical OFDM in AWGN" IEEE Commun. Lett., vol. 12, pp. 343-345, 2008. [Crossref]
[28] X. Zhang, T. Van Luong, P. Petropoulos, L. Hanzo, "Machine-Learning-Aided Optical OFDM for Intensity Modulated Direct Detection" J. Light. Technol., vol. 40, pp. 2357-2369, 2022. [Crossref]
[29] S. Rajagopal, R.D. Roberts, S.-K. Lim, "IEEE 802.15.7 visible light communication: modulation schemes and dimming support" IEEE Commun. Mag., vol. 50, pp. 72-82, 2012. [Crossref]
[30] I. Misra, A. Shrivastava, A. Gupta, M. Hebert, "Cross-Stitch Networks for Multi-task Learning," in Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 27–30 July 2016, Las Vegas, NV, USA, pp. 3994-4003.
Disclaimer: All statements, viewpoints, and data presented in this article are the sole responsibility of the individual author(s) and contributor(s) and do not represent those of their affiliated institutions, the publisher, the editor(s), or reviewers. The publisher and its editor(s) accept no liability for any damage to individuals or property that may result from the use of ideas, methods, instructions, or products discussed within the content.