Multi-view graph contrastive learning for social recommendation


ABSTRACT With the rapid rise in popularity of online social media, recommendation systems have increasingly harnessed social relations to complement user-item interactions and mitigate the data


sparsity issue. Beyond social connections, the semantic relatedness among items has emerged as a crucial factor in comprehending their inherent connections. In this work, we propose a novel


Multi-view Contrastive learning framework for Social Recommendation, named MultiCSR. This framework adaptively incorporates user social networks and item knowledge graphs into modeling users'


preferences within recommendation systems. To facilitate the alignment of different views, we introduce a dedicated multi-view contrastive learning process that extracts rich information


from each view and fosters mutual enhancement. Extensive experiments conducted on three real-world datasets demonstrate the effectiveness of our framework over representative recommendation


methods. Furthermore, ablation studies offer a deeper understanding of the mechanisms underlying our framework. SIMILAR CONTENT BEING VIEWED BY OTHERS ENHANCED KNOWLEDGE GRAPH RECOMMENDATION


ALGORITHM BASED ON MULTI-LEVEL CONTRASTIVE LEARNING Article Open access 04 October 2024 GRAPH NEURAL NETWORK RECOMMENDATION ALGORITHM BASED ON IMPROVED DUAL TOWER MODEL Article Open access


15 February 2024 A DATA AUGMENTATION MODEL INTEGRATING SUPERVISED AND UNSUPERVISED LEARNING FOR RECOMMENDATION Article Open access 10 February 2025 INTRODUCTION Recommendation systems have


emerged as an important tool in managing information overload across a wide range of online applications, such as e-commerce1, news portals2,3, social media4, etc. The essence of a


recommendation system lies in information filtering, which efficiently sifts through a large pool of potential candidates and delivers personalized and relevant items to each individual


user. Collaborative filtering (CF)5,6,7,8,9,10,11,12,13,14, a cornerstone methodology in recommendation systems, extracts high-quality, low-dimensional embeddings from historical user-item


interactions. These embeddings are then utilized to estimate relevance scores that predict user preferences for items. Traditionally, CF models have relied on


matrix factorization (MF) techniques, which decompose a user-item interaction matrix into lower-dimensional latent factor matrices to uncover hidden patterns and predict user preferences.


However, the landscape of recommendation systems has evolved with the advent of graph neural networks (GNNs)15,16,17. GNN-based collaborative filtering methods have recently garnered


significant attention and have achieved state-of-the-art performance in recommendation tasks. The main idea of GNNs is to iteratively aggregate information from


neighbors and learn higher-order structural information from the stacking of multiple propagation layers. The shift towards GNNs is driven by their ability to model intricate user-item


interactions and to integrate various types of relational information, offering a more nuanced and interconnected approach to recommendation. The rise of online social media has


revolutionized the way people communicate, form relationships, share life moments, and stay informed about the latest news and trends. This shift has had a profound impact on various aspects


of daily life, influencing behaviors and preferences in unprecedented ways. Research grounded in social influence theory18,19 has demonstrated that individuals often conform to the social


norms of their peer groups or mimic the actions of admired figures or authorities. Based on these findings, traditional recommendation systems, which primarily focus on analyzing the


user-item behavioral interactions, have evolved to incorporate the social connections between users to enhance recommendation performance20,21,22,23. Moreover, the incorporation of semantic


relatedness between items has been instrumental in significantly bolstering the efficacy of recommendation systems24. SMIN25 explores the potential of jointly incorporating social- and


knowledge-aware relational structures. It designs a metapath-guided heterogeneous graph neural network that learns from different types of meta-relations, achieving superior performance over


previous methods. This advancement underscores the significance of considering semantic relatedness of items in enhancing social recommendation. Despite its success, the reliance on


manually curated meta-relations may not fully capture the depth and complexity of information inherent in multi-relational data. Then, GL-HGNN26 constructs a heterogeneous global graph to


simultaneously integrate user-user, user-item, and item-item relations, optimizing these relationships through a novel graph learning mechanism and scaling up the approach using an


anchor-based graph learner to enhance recommendation accuracy and efficiency. However, in scenarios where there are sparse links among nodes, using node similarity for graph structure


learning can introduce excessive noise, leading to unreliable recommendation outcomes. Recently, Graph Contrastive Learning has been identified as a promising approach to learn robust


representations in situations where there is a lack of sufficient graph structural observation labels. Leveraging the Graph Contrastive Learning technique, HGCL27 integrates heterogeneous


relational semantics into user-item interaction modeling, with personalized augmentation via meta networks to enable adaptive knowledge transfer across different views. Similar to HGCL, our


framework is still based on the Graph Contrastive Learning technique, but we utilize a novel contrastive view generation process for social recommendation, which aggregates different graph views


as the final contrastive views. Specifically, we propose a novel Multi-view Contrastive learning framework for Social Recommendation(MultiCSR), which integrates information from a multitude


of relational sources and captures the synergistic connections between these sources through multi-view contrastive learning. MultiCSR treats the social ties among users and the semantic


links between items as two separate yet interconnected views. To capture the multifaceted nature of recommendation data, MultiCSR meticulously constructs three types of graphs: a user-user


graph to delineate social ties, an item-item graph to articulate semantic associations, and a user-item graph to model behavioral interactions. By applying LightGCN14 across these graphs,


MultiCSR extracts a comprehensive set of representations that encapsulate the social, semantic, and behavioral dimensions of information. To further refine these representations, MultiCSR


introduces adaptive adjustment mechanisms that dynamically consolidate information from the various views, creating a more nuanced and integrated understanding of user preferences and item


attributes. Moreover, the framework incorporates multi-view contrastive learning modules designed to distill and align information from the user-side and item-side auxiliary knowledge,


enhancing the system’s ability to make informed recommendations. We conduct extensive experiments that demonstrate the superior performance of our framework over established baselines in the


domain of social recommendation. The results not only validate the effectiveness of our approach but also highlight the importance of integrating and aligning information from disparate


relational sources to achieve better recommendation performance. In summary, our main contributions are as follows: * We introduce MultiCSR, an effective social recommender system that


synergizes social and item relations through multi-view contrastive learning, thereby integrating and capturing the complementary connections across different relational views. * We design


adaptive adjustment mechanisms that dynamically consolidate different views, enhancing the accuracy and effectiveness of the recommendation process. * Extensive experimental results show


that our framework achieves superior performance over the representative baselines on social recommendation. RELATED WORKS SOCIAL RECOMMENDATION Earlier methods incorporate the social


relationships into the matrix factorization architecture. For example, TrustMF20 maps users into low-dimensional latent feature spaces in terms of their trust relationships. Rising to


prominence with the advent of deep learning, this field has witnessed a significant expansion in methodologies for exploring social recommendation. Some works21,22 introduce attention


mechanisms to model the influence of social relations on recommendation systems. In particular, in light of the remarkable performance of Graph Neural Networks (GNNs) in representing


graph-based data, researchers are now focusing on designing heterogeneous graph neural networks to incorporate the intricate semantics of diverse relationships into the latent


representational space. GraphRec28 first introduces GNNs to social recommendation by modeling the social relations and behavioral interactions as distinct graphs: user-user and user-item


graphs, respectively. DiffNet29 and DiffNet++30 simulate the recursive social diffusion process in the global social network. DGRec23 models dynamic user behaviors and context-dependent


social influence with a graph-attention neural network. Recent advancements in social recommendation have seen an integration of knowledge-aware information from the item domain. SMIN25


first introduces a metapath-guided heterogeneous graph neural network that learns from different types of meta-relations, achieving superior performance over previous methods. GL-HGNN26


constructs a heterogeneous global graph to simultaneously integrate user-user, user-item, and item-item relations, optimizing these relationships through a novel graph learning mechanism and


scaling up the approach using an anchor-based graph learner to enhance recommendation accuracy and efficiency. GNN-BASED RECOMMENDATION Traditional recommendation systems often rely on


user-item interaction data, which can be represented as a bipartite graph with users and items as nodes and their interactions as edges. Graph Neural Networks (GNNs)15,16,31 have been a


powerful tool for handling graph data, making them an ideal choice for modeling recommendation systems. Specifically, GNNs iteratively refine the representations of each node by aggregating


information from its neighborhood. By stacking multiple layers, GNNs can capture higher-order relationships within the graph data. Neural Graph Collaborative Filtering (NGCF)13 first


integrates the bipartite graph structure into the embedding process to model the high-order connectivity in user-item graph, effectively injecting the collaborative signal into the embedding


process in an explicit manner. Later, a study by LR-GCCF32 challenges the necessity of nonlinearity in NGCF, advocating for a simpler model. LightGCN14 further demonstrates that, in the


context of collaborative filtering (CF), nonlinear activation functions and additional weight matrices are not only superfluous but may also hinder model training. Specifically, LightGCN14


generates user and item embeddings by performing a linear propagation across the user-item interaction graph, and it employs the weighted sum of embeddings learned at all layers as the final


embedding. Our framework leverages LightGCN’s approach to encode the intricate relationships across three dimensions, subsequently employing contrastive learning to further enhance the


performance of the social recommendation system. In addition to these approaches, some studies have utilized knowledge graphs for recommendations. For instance, Wang et al.33 have proposed a


multi-level reasoning framework that incorporates abstract Markov Decision Processes and a Cascading Actor-Critic method. This framework aims to improve recommendation accuracy and


interpretability by efficiently transferring high-level knowledge to inform lower-level reasoning processes. CONTRASTIVE LEARNING IN RECOMMENDATION Contrastive learning (CL), a


self-supervised learning paradigm, has recently emerged as a powerful tool across multiple domains, achieving remarkable success in areas such as computer vision34,35, natural language


processing36,37, and graph learning38,39,40. Motivated by the transformative impact of contrastive learning in these domains, researchers have begun to explore its potential within the realm


of recommendation systems. The core of contrastive learning lies in the design of the contrastive views. Some approaches utilize data augmentation techniques to generate diverse views from


the original data. For example, SGL41 employs techniques, such as node dropout, edge dropout, and random walk to create contrastive views for recommendation based on user-item interactions.


CL4SRec42 designs data augmentation approaches like item crop, item mask, and item reorder to construct various views of the same sequence while preserving the underlying preferences hidden


in historical behaviors. Other methods leverage the multiple views inherent in the data. COTREC43, for example, exploits the session-based graph to create two views that exhibit the


internal and external connectivities of sessions. CLCRec44 treats the item content signal and the collaborative signal as two contrastive views to address the cold-start recommendation


problem. CrossCBR45 considers the bundle view and the item view as two separate views, and models their cooperative association through cross-view contrastive learning. HGCL27 integrates


heterogeneous relational semantics into user-item interaction modeling using contrastive learning, with personalized augmentation via meta networks to enable adaptive knowledge transfer


across different views. Our proposed MultiCSR model takes a multifaceted approach to recommendation learning. We treat the three fundamental types of relationships in the system, user-user,


item-item, and user-item, as separate yet interconnected views. We then use the adaptively aggregated views as the augmented contrastive views to better align the representations from


different relationships. This multi-view contrastive learning allows our model to capture the complementary information present in each relationship, leading to a more comprehensive


understanding of user preferences and item characteristics, and ultimately, to improved recommendation performance. METHODOLOGY In this section, we first introduce the notations used in our


model. Then, we present the details of MultiCSR, which consists of two components: multi-view heterogeneous graph learning and multi-view contrastive learning. An overview of MultiCSR is shown in Fig. 1. NOTATIONS In this paper, we represent the set of users as \(\mathscr {U}=\left\{ u_{1}, \cdots , u_{|\mathscr {U}|}\right\}\) and the set of items as \(\mathscr {V}=\left\{


v_{1}, \cdots , v_{|\mathscr {V}|}\right\}\). We define the user-item behavioral graph as \(\mathscr {G}_{ui}=\{\mathscr {U}, \mathscr {V}, \mathscr {E}_{ui}\}\), where an edge is present between a user \(u\in \mathscr {U}\) and an item \(v\in \mathscr {V}\) if _u_ has interacted with _v_. To represent the social relationships between users, we define the user-user graph as \(\mathscr {G}_{uu}=\{\mathscr {U}, \mathscr {E}_{uu}\}\), where \(\mathscr {E}_{uu}\) is the set of edges representing user connections. To model the semantic relatedness between items, we define the item-item graph as \(\mathscr {G}_{ii}=\{\mathscr {V}, \mathscr {E}_{ii}\}\), where \(\mathscr {E}_{ii}\) is the set of edges representing item links. The adjacency matrices corresponding to \(\mathscr {G}_{ui}\), \(\mathscr {G}_{uu}\), \(\mathscr {G}_{ii}\) are denoted as \(\mathscr {A}_{ui}\), \(\mathscr {A}_{uu}\), \(\mathscr {A}_{ii}\), respectively. Table 1 presents the symbolic representations of


our model. MULTI-VIEW HETEROGENEOUS GRAPH LEARNING Building upon the prevailing GNN-based recommendation framework, LightGCN14, we sequentially derive representations from three distinct


views: the U-I behavioral, U-U social, and I-I relational views. BEHAVIORAL-VIEW REPRESENTATION LEARNING To learn the user and item representations from the user-item behavioral view, we


first construct a user-item bipartite graph, \(\mathscr {G}_{ui}\). Then, we perform embedding propagation using LightGCN, where the propagation in the _l_+1-th layer is represented as:


$$\begin{aligned} \left\{ \begin{aligned} e_{u}^{UI(\textit{l}+1)}&=\sum _{i \in \mathscr {N}_{u}^{UI}} \frac{1}{\sqrt{\left| \mathscr {N}_{u}^{UI}\right| } \sqrt{\left| \mathscr {N}_{i}^{UI}\right| }} e_{i}^{UI(\textit{l})}, \\ e_{i}^{UI(\textit{l}+1)}&=\sum _{u \in \mathscr {N}_{i}^{UI}} \frac{1}{\sqrt{\left| \mathscr {N}_{i}^{UI}\right| } \sqrt{\left| \mathscr {N}_{u}^{UI}\right| }} e_{u}^{UI(\textit{l})}, \end{aligned} \right. \end{aligned}$$ (1) where \(e_{u}^{UI(\textit{l}+1)}, e_{i}^{UI(\textit{l}+1)} \in \mathbb {R}^d\)


represent the embeddings at the _l_+1-th layer for user _u_ and item _i_, respectively; _d_ is the hidden dimensionality; the superscript _UI_ denotes the user-item (U-I) behavioral view;


\(e_{u}^{UI(0)}\) and \(e_{i}^{UI(0)}\) are randomly initialized; \(\mathscr {N}_{u}^{UI}\) and \(\mathscr {N}_{i}^{UI}\) denote the first-hop neighbors of user _u_ and item _i_ in the U-I graph, respectively. When the embedding propagation stops at the _L_-th layer, we aggregate the embeddings of all layers to combine the information received from neighbors of different depths. The


final representations of the U-I behavioral view are denoted as: $$\begin{aligned} e_{u}^{UI*} =\sum _{\textit{l}=0}^{L} \frac{e_{u}^{UI(\textit{l})}}{L+1}, \quad e_{i}^{UI*} =\sum


_{\textit{l}=0}^{L} \frac{e_{i}^{UI(\textit{l})}}{L+1}, \end{aligned}$$ (2) where \(\frac{1}{L+1}\) is the weight of each layer's embedding, following the setting in LightGCN14.
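As an illustration of the propagation rule in Eq. (1) and the layer aggregation in Eq. (2), the following is a minimal PyTorch-style sketch; the tensor names (norm_adj, user_emb, item_emb) and the assumption of a precomputed, symmetrically normalized sparse adjacency matrix are ours and not part of the authors' released implementation.

```python
import torch

def lightgcn_propagate(user_emb, item_emb, norm_adj, num_layers):
    """LightGCN-style propagation (Eqs. 1-2), a minimal sketch.

    user_emb : (|U|, d) layer-0 user embeddings
    item_emb : (|V|, d) layer-0 item embeddings
    norm_adj : sparse (|U|+|V|, |U|+|V|) symmetrically normalized adjacency
               of the user-item bipartite graph, i.e. D^{-1/2} A D^{-1/2}
               (assumed to be precomputed).
    """
    ego = torch.cat([user_emb, item_emb], dim=0)   # layer-0 embeddings
    all_layers = [ego]
    for _ in range(num_layers):
        # one propagation step: aggregate degree-normalized neighbor embeddings
        ego = torch.sparse.mm(norm_adj, ego)
        all_layers.append(ego)
    # average the L+1 layers with weight 1/(L+1) each, as in Eq. (2)
    final = torch.stack(all_layers, dim=0).mean(dim=0)
    return torch.split(final, [user_emb.size(0), item_emb.size(0)], dim=0)
```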


SOCIAL-VIEW REPRESENTATION LEARNING We first construct a user-user graph, \(\mathscr {G}_{uu}\), which serves as a foundation for modeling the social relationships between users. To control the


information flow from the behavioral relation to the social relation, we utilize a pre-filter with self-gating units46, which is defined as: $$\begin{aligned} e_{u}^{UU(0)} = e_{u}^{UI(0)} \odot


\sigma (e_{u}^{UI(0)} W_u + b_u), \end{aligned}$$ (3) where \(\sigma\) denotes the sigmoid function; \(\odot\) represents element-wise multiplication; \(W_u\in \mathbb {R}^{d \times d}\) and


\(b_u\in \mathbb {R}^{d \times 1}\) are the parameters to be learned. Then, we perform embedding propagation, where the propagation in the _l_+1-th layer is denoted as: $$\begin{aligned}


e_{u}^{UU(\textit{l}+1)}&=\sum _{u' \in \mathscr {N}_{u}^{UU}} \frac{1}{\sqrt{\left| \mathscr {N}_{u}^{UU}\right| } \sqrt{\left| \mathscr {N}_{u'}^{UU}\right| }} e_{u'}^{UU(\textit{l})}, \end{aligned}$$ (4) where \(e_{u}^{UU(\textit{l}+1)}\in \mathbb {R}^d\) is the _l_+1-th layer's embedding of user _u_; _d_ is the hidden dimensionality; the


superscript _UU_ indicates the U-U social view; \(\mathscr {N}_{u}^{UU}\) denotes the first-hop neighbors of user _u_ in the U-U social graph. When the embedding propagation stops at the _L_-th layer, we aggregate the embeddings of all layers to combine the information received from neighbors of different depths. The final representations of the U-U social view are denoted as:


$$\begin{aligned} e_{u}^{UU*} =\sum _{\textit{l}=0}^{L} \frac{e_{u}^{UU(\textit{l})}}{L+1}. \end{aligned}$$ (5) RELATIONAL-VIEW REPRESENTATION LEARNING Similar to the social view, we first


construct an item-item graph, \(\mathscr {G}_{ii}\), which captures the semantic relatedness between items. To control the information flow from the behavioral relation to the item relation, we


utilize a pre-filter with self-gating units, which is defined as: $$\begin{aligned} e_{i}^{II(0)} = e_{i}^{UI(0)} \odot \sigma (e_{i}^{UI(0)} W_i + b_i), \end{aligned}$$ (6) where \(W_i\in


\mathbb {R}^{d \times d}\) and \(b_i\in \mathbb {R}^{d \times 1}\) are the parameters to be learned. Then, we perform embedding propagation, where the propagation in the _l_+1-th layer is


denoted as: $$\begin{aligned} e_{i}^{II(\textit{l}+1)}&=\sum _{j \in \mathscr {N}_{i}^{II}} \frac{1}{\sqrt{\left| \mathscr {N}_{i}^{II}\right| } \sqrt{\left| \mathscr {N}_{j}^{II}\right| }} e_{j}^{II(\textit{l})}, \end{aligned}$$ (7) where \(e_{i}^{II(\textit{l}+1)}\in \mathbb {R}^d\) is the _l_+1-th layer's embedding of item _i_; _d_ is the hidden dimensionality;


the superscript _II_ indicates the I-I item relational view; \(\mathscr {N}_{i}^{II}\) denotes the first-hop neighbors of item _i_ in the I-I item relational graph. When the embedding propagation stops at the _L_-th layer, we aggregate the embeddings of all layers to combine the information received from neighbors of different depths. The final representations of the I-I item


relational view are denoted as: $$\begin{aligned} e_{i}^{II*} =\sum _{\textit{l}=0}^{L} \frac{e_{i}^{II(\textit{l})}}{L+1}. \end{aligned}$$ (8) In summary, we can learn the representations


of users and items from three views, denoted as \(E_{U}^{UI*}, E_{U}^{UU*} \in \mathbb {R}^{|\mathscr {U}| \times d}\) and \(E_{I}^{UI*}, E_{I}^{II*} \in \mathbb {R}^{|\mathscr {V}| \times


d}\), where the superscripts _UI_, _UU_, and _II_ stand for the user-item behavioral, user-user social, and item-item relational view, respectively. Thereafter, given a user _u_, we can


obtain its user-item behavioral-view and user-user social-view representations, i.e., \(e_{u}^{UI*}\) and \(e_{u}^{UU*}\), and given an item _i_, we can obtain its user-item behavioral-view


and item-item relational-view representations, i.e., \(e_{i}^{UI*}\) and \(e_{i}^{II*}\).
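Before moving on, a brief sketch of the self-gating pre-filters in Eqs. (3) and (6), which seed the social-view and relational-view propagation from the shared layer-0 embeddings, may be helpful; the class name SelfGating and its use of nn.Linear to hold \(W\) and \(b\) are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class SelfGating(nn.Module):
    """Self-gating pre-filter (Eqs. 3 and 6): e' = e * sigmoid(e W + b), element-wise."""

    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Linear(dim, dim)   # holds W (d x d) and b

    def forward(self, emb):
        # element-wise multiplication with a learned sigmoid gate
        return emb * torch.sigmoid(self.gate(emb))

# usage sketch (one gate per auxiliary view; names are hypothetical):
# e_u_uu0 = SelfGating(d)(e_u_ui0)   # social-view user seeds, Eq. (3)
# e_i_ii0 = SelfGating(d)(e_i_ui0)   # relational-view item seeds, Eq. (6)
```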


ADAPTIVE VIEW INTEGRATION We introduce adaptive adjustment mechanisms to effectively integrate information derived from various relational views within our recommendation framework. To adaptively fuse the representations from the user-item behavioral view and the user-user social view


for users, we design the aggregator _s_ that aggregates the behavioral user representations \(E_{U}^{UI*}\) and the social user representations \(E_{U}^{UU*}\). This fusion results in an


aggregated user embedding matrix \(E_U^{A}\), formulated as follows: $$\begin{aligned} E_U^{A}= s * E_{U}^{UU*} + (1-s) * E_{U}^{UI*}, \end{aligned}$$ (9) where \(s \in \mathbb


{R}^{|\mathscr {U}|}\) is an adaptive weight vector which balances the contributions of the social view and the behavioral view for each user. This vector assigns unique weights to different


users, enabling a nuanced aggregation of behavioral and social signals. The weight vector _s_ is modeled as a function of the social-view user representation \(E_{U}^{UU*}\), defined by:


$$\begin{aligned} s = \sigma (E_{U}^{UU*} W_s + b_s), \end{aligned}$$ (10) where \(W_s \in \mathbb {R}^{d \times 1}\) and \(b_s\) are learnable parameters to map the user-user social-view


user representation \(E_{U}^{UU*}\) into the weight score vector _s_. \(\sigma\) denotes the sigmoid activation function. Similar to the integration of user-side information, we aim to


adaptively fuse representations derived from both the user-item behavioral view and the item-item relational view. The aggregated item embedding matrix \(E_I^{A}\) is computed using the


following formula: $$\begin{aligned} E_I^{A}= w * E_{I}^{II*} + (1-w) * E_{I}^{UI*}, \end{aligned}$$ (11) where \(w \in \mathbb {R}^{|\mathscr {V}|}\) is an adaptive weight vector that


modulates the impact of the relational view and the behavioral view for each item. To model _w_, we formulate it as a function of \(E_{I}^{II*}\) as follows: $$\begin{aligned} w = \sigma


(E_{I}^{II*} W_w + b_w), \end{aligned}$$ (12) where \(W_w \in \mathbb {R}^{d \times 1}\) and \(b_w\) are the parameters to transform the item-item relational-view representations into the


weight score vector _w_. By employing such adaptive adjustment mechanisms, our framework can effectively harmonize the diverse signals from multiple relational views, leading to enriched


user and item representations, \(E_U^{A}\) and \(E_I^{A}\), that are better suited for accurate and personalized recommendation tasks.
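A hedged sketch of the adaptive view integration in Eqs. (9)-(12) is given below; the module name AdaptiveFusion is ours, and we assume the per-node weight is produced by a single linear layer followed by a sigmoid, mirroring Eqs. (10) and (12).

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Adaptive view integration (Eqs. 9-12), a minimal sketch."""

    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)   # W_s (d x 1) and b_s

    def forward(self, aux_view, behav_view):
        # per-node weight s = sigmoid(E_aux W_s + b_s), shape (N, 1)
        s = torch.sigmoid(self.score(aux_view))
        # E^A = s * E_aux + (1 - s) * E_behav
        return s * aux_view + (1.0 - s) * behav_view

# usage sketch (names are hypothetical):
# e_user_A = AdaptiveFusion(d)(e_user_uu, e_user_ui)   # Eqs. (9)-(10)
# e_item_A = AdaptiveFusion(d)(e_item_ii, e_item_ui)   # Eqs. (11)-(12)
```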


MULTI-VIEW CONTRASTIVE LEARNING We first obtain the distinct representations for users and items from the three views. These views capture various aspects of the data and provide complementary information. To effectively integrate and


leverage this multi-faceted information, we design the multi-view contrastive learning module to align the information derived from different views. The core principle behind the


self-supervised contrastive learning is to enhance the representation affinity among different yet related views of the same object, while simultaneously increasing the representation


dispersion of different objects47. The choice of contrastive views used in the contrastive learning paradigm is of paramount importance. The contrastive views not only determine the type of


information that the model can learn but also influence the quality and utility of the learned representations. In our work, we select two sets of generated views for both users and items to


serve as the final contrastive views for training. For users, we select \(E_U^{A}\) and \(E_U^{UU*}\), while for items, we choose \(E_I^{A}\) and \(E_I^{II*}\). To achieve the alignment of


the same user/item across different views and enforce the separation of different users/items, we adopt the popular InfoNCE48 loss function. The contrastive losses can be mathematically


formulated as follows: $$\begin{aligned} \mathscr {L}^{CL}_{U}&= \frac{1}{|\mathscr {U}|} \sum _{u\in \mathscr {U}}{-\log \frac{\exp (s(e_u^{A}, e_u^{UU*})/\tau )}{\sum _{u'\in \mathscr {U}}{\exp (s(e_u^{A}, e_{u'}^{UU*})/\tau )}}}, \end{aligned}$$ (13) $$\begin{aligned} \mathscr {L}^{CL}_{I}&= \frac{1}{|\mathscr {V}|} \sum _{i\in \mathscr {V}}{-\log \frac{\exp (s(e_i^{A}, e_i^{II*})/\tau )}{\sum _{i'\in \mathscr {V}}{\exp (s(e_i^{A}, e_{i'}^{II*})/\tau )}}}, \end{aligned}$$ (14) where \(\mathscr {L}^{CL}_{U}\) and \(\mathscr {L}^{CL}_{I}\) denote the multi-view


contrastive losses for users and items, respectively; \(s(\cdot , \cdot )\) is the cosine similarity function; \(\tau\) is a hyper-parameter known as the temperature. Finally, we add the two multi-view contrastive losses together to obtain the final contrastive loss \(\mathscr {L}^{CL}\): $$\begin{aligned} \mathscr {L}^{CL} = \lambda _1 \mathscr {L}^{CL}_{U} + \lambda _2 \mathscr {L}^{CL}_{I}, \end{aligned}$$ (15) where \(\lambda _1\) and \(\lambda _2\) are weighting hyper-parameters that control the contributions of \(\mathscr {L}^{CL}_{U}\) and \(\mathscr {L}^{CL}_{I}\).
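For concreteness, the following sketch implements the InfoNCE objectives of Eqs. (13)-(14) and their weighted sum in Eq. (15), treating all other users (items) in the given batch of representations as negatives; the function names and this batching choice are our own assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def info_nce(view_a, view_b, tau):
    """InfoNCE over cosine similarities (Eqs. 13-14), a minimal sketch.

    view_a, view_b : (N, d) representations of the same N users (or items)
    from two views; row n of view_a and row n of view_b form a positive pair.
    """
    a = F.normalize(view_a, dim=-1)
    b = F.normalize(view_b, dim=-1)
    logits = a @ b.t() / tau                  # (N, N) cosine similarities / tau
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)    # -log softmax of each row's diagonal entry

def multi_view_cl_loss(e_u_a, e_u_uu, e_i_a, e_i_ii, tau, lam1, lam2):
    """Eq. (15): weighted sum of the user-side and item-side contrastive losses."""
    return lam1 * info_nce(e_u_a, e_u_uu, tau) + lam2 * info_nce(e_i_a, e_i_ii, tau)
```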


OPTIMIZATION To learn the parameters of MultiCSR, we employ the Bayesian Personalized Ranking (BPR) loss49, which is a powerful tool for learning personalized rankings in recommendation


systems. The BPR loss function works by comparing predicted scores for pairs of items: one that the user has interacted with (the positive item) and one that the user has not interacted with


(the negative item). The goal is to maximize the difference between the predicted score for the positive item and the predicted score for the negative item. In our model, the BPR loss is


defined as follows: $$\begin{aligned} \mathscr {L}^{BPR} = \sum _{(u,i^+,i^-)\in \mathscr {B}}{-\ln \sigma (y_{u,i^+}^*-y_{u,i^-}^*)}, \end{aligned}$$ (16) where \(\mathscr {B}\) represents the set of triplets \((u,i^+,i^-)\) in which \(u\in \mathscr {U}\), \(i^+ \in \mathscr {V}\) is a positive item with \(\mathscr {A}_{ui^+}=1\), and \(i^- \in \mathscr {V}\) is a randomly sampled negative item with \(\mathscr {A}_{ui^-}=0\). The predicted score \(y_{u,i}^*\) is computed as the dot product between the combined representations of user _u_ and item _i_:


$$\begin{aligned} y_{u,i}^* = (e_u^{A} + e_u^{UU*})^T (e_i^{A} + e_i^{II*}). \end{aligned}$$ (17) \(\sigma\) is the sigmoid function. We obtain the overall loss \(\mathscr {L}\) by a weighted combination of the BPR loss \(\mathscr {L}^{BPR}\), the contrastive loss \(\mathscr {L}^{CL}\), and the \(L_2\) regularization term \(||\Theta ||_2^2\): $$\begin{aligned} \mathscr {L} = \mathscr {L}^{BPR} + \beta _1 \mathscr {L}^{CL} + \beta _2 ||\Theta ||_2^2, \end{aligned}$$ (18) where \(\Theta\) denotes the trainable parameters, regularized with strength \(\beta _2\); \(\beta _1\) is a hyper-parameter that balances the BPR loss and the final contrastive loss.
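The prediction rule of Eq. (17), the BPR objective of Eq. (16), and the joint loss of Eq. (18) can be sketched as follows; the variable names and the way the \(L_2\) term is accumulated over model parameters are our own assumptions.

```python
import torch
import torch.nn.functional as F

def predict_scores(e_u_a, e_u_uu, e_i_a, e_i_ii):
    """Eq. (17): dot product of the combined user and item representations."""
    return ((e_u_a + e_u_uu) * (e_i_a + e_i_ii)).sum(dim=-1)

def bpr_loss(pos_scores, neg_scores):
    """Eq. (16): -ln sigma(y_pos - y_neg), summed over the sampled triplets."""
    return -F.logsigmoid(pos_scores - neg_scores).sum()

def total_loss(pos_scores, neg_scores, cl_loss, params, beta1, beta2):
    """Eq. (18): BPR loss + beta1 * contrastive loss + beta2 * L2 regularization."""
    reg = sum(p.pow(2).sum() for p in params)
    return bpr_loss(pos_scores, neg_scores) + beta1 * cl_loss + beta2 * reg
```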


EXPERIMENTS In this section, we conduct extensive experiments to assess the efficacy of our proposed MultiCSR framework. Specifically, we aim to answer the following research questions: * How does our proposed MultiCSR perform


in enhancing social recommendations? * How do different components affect the performance of MultiCSR? * What is the performance of MultiCSR across diverse environments characterized by


different levels of sparsity in user interaction data and social relation data? * What is the impact of essential hyperparameters on the performance of the model? DATASETS We conduct


extensive experiments on three real-world datasets: (1) CIAO AND EPINIONS are two product review benchmarks which contain ratings for products given by users and trust relations between


users. (2) YELP contains user ratings on business venues collected from the Yelp platform, as well as users' online friendships with respect to their similar interests. The dataset statistics are presented in Table 2. BASELINES We compare MultiCSR with the following representative methods for social recommendation. We list the features of the compared methods in Table 3. * DGI38: We


incorporate the social information among users and semantic relatedness between items through DGI. The message passing is built based on graph convolutions. * GraphCL50: We incorporate the


social information among users and semantic relatedness between items through GraphCL. The message passing is built based on graph convolutions. * HAN51: We utilize a representative


heterogeneous graph neural network to produce user and item representations through a meta-path-based attention encoder. * SMIN25: It jointly considers social connections between users and


semantic relatedness between items to enhance the collaborative effect encoding for social recommendation. * DSL25: It enables denoised cross-view alignment between the encoded embeddings


from social and interaction views. * HGCL25: It enhances the heterogeneous graph contrastive learning with meta networks to allow the personalized knowledge transformer with adaptive


contrastive augmentation. EVALUATION PROTOCOLS We employ three evaluation metrics: Hit Ratio (HR@K), Normalized Discounted Cumulative Gain (NDCG@K), and Recall@K, with \(K \in \{3, 5, 10, 20\}\), to


evaluate the performance of all methods.
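For reference, the per-user computation of HR@K, Recall@K, and NDCG@K can be sketched as below; the helper name metrics_at_k is ours, and details of the evaluation protocol (candidate sampling, tie breaking) are not specified by this sketch.

```python
import numpy as np

def metrics_at_k(ranked_items, ground_truth, k):
    """Per-user HR@K, Recall@K, and NDCG@K, a minimal sketch.

    ranked_items : list of item ids sorted by predicted score (descending)
    ground_truth : set of held-out items the user actually interacted with
    """
    top_k = ranked_items[:k]
    hits = [1.0 if item in ground_truth else 0.0 for item in top_k]
    hr = 1.0 if sum(hits) > 0 else 0.0
    recall = sum(hits) / max(len(ground_truth), 1)
    dcg = sum(h / np.log2(i + 2) for i, h in enumerate(hits))
    idcg = sum(1.0 / np.log2(i + 2) for i in range(min(len(ground_truth), k)))
    ndcg = dcg / idcg if idcg > 0 else 0.0
    return hr, recall, ndcg
```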


PERFORMANCE COMPARISON Table 4 presents a detailed comparison of our proposed framework, MultiCSR, with other representative baselines on three datasets. From the table, we can observe that our MultiCSR demonstrates exceptional performance across the three datasets, outperforming existing state-of-the-art methods on HR and


NDCG. For the Recall metric, MultiCSR also exhibits competitive performance, securing first-place and second-place results on two out of three datasets. The superior performance of MultiCSR can


be attributed to several key designs. Firstly, the multi-view heterogeneous graph learning is specifically designed to handle the complexity of heterogeneous relations present in social


recommendation scenarios. It effectively extracts various types of information, such as user preferences, social network structures, and item attributes, to provide a nuanced understanding


of the underlying data. Secondly, the adaptive adjustment mechanisms can effectively integrate different types of information and enhance the recommendation performance. Moreover, the


multi-view contrastive learning can further enhance the model through aligning the representations derived from different views. The comprehensive evaluation of MultiCSR across multiple


datasets and metrics underscores the potential of our model as a powerful tool for social recommendation systems, capable of effectively handling the complexities of social and item


relations to enhance recommendation quality and user experience. ABLATION STUDY We conduct an ablation study to demonstrate the effectiveness of the key designs of our model on Ciao and


Epinions. The results are presented in Figs. 2 and 3, respectively. To achieve this, we implement four variants as follows: * w/o _AGG_: In this variant, we eliminate the adaptive


aggregators, _s_ and _w_, from the MultiCSR model. These aggregators are crucial for the adaptive integration of representations from different views, allowing the model to dynamically weigh


the contributions of user-user, item-item, and user-item relationships. By removing this component, we aim to assess the importance of adaptive view integration in the overall performance of


the model. * w/o CL: This variant involves the removal of both the user-side and item-side contrastive losses from the MultiCSR model. The contrastive loss is a key element in our


multi-view learning framework, as it facilitates the alignment of different views and promotes a unified information flow across the model. By disabling the contrastive loss, we can measure


its impact on the model’s ability to capture the nuanced relationships within the data. * w/o U-U: We remove the user-user social graph from MultiCSR in this variant. The user-user graph is


instrumental in capturing the social influence and trust dynamics among users, which are critical for social recommendation. By ablating this component, we seek to understand the extent to


which social interactions between users enhance the recommendation quality. * w/o I-I: In this variant, we exclude the item-item relational graph from MultiCSR. The item-item graph models


the semantic relatedness and associations between items, which is essential for capturing the underlying structure of the item space. The removal of this graph allows us to evaluate its


contribution to the model. From the results presented in Figs. 2 and 3, we have the following findings: (1) Across the Ciao and Epinions datasets, we observe a significant performance


drop for the MultiCSR model when subjected to the four ablation variants. This observation underscores the critical role of the key design elements within our model. The adaptive


aggregators, user-user and item-item relational graphs, as well as the contrastive loss, all contribute significantly to the model’s ability to provide accurate and effective


recommendations. (2) The most substantial performance decline was noted when the contrastive loss (CL) was removed from the model (w/o CL). This finding highlights the importance of the


contrastive learning in our model’s architecture. (3) Comparing the performance drops of the w/o I-I and w/o U-U variants, it is evident that the removal of item-item relations has a more


pronounced impact on the model’s performance. This observation confirms that item-item relations are indeed a vital component for enhancing the performance of a recommendation system.


PERFORMANCE WITH DIFFERENT DATA SPARSITY DEGREES IN USER ITEM INTERACTIONS To comprehensively evaluate the effectiveness of the model in recommending items with varying sparsity levels of


user-item interactions, we conduct a thorough analysis by dividing users into five groups according to the number of their observed interactions. Since HGCL is the optimal baseline model, we


compare our model against it on the Ciao dataset and present the results in Fig. 4. The performance of each model is depicted with lines against the left y-axis, while bars against the right y-axis illustrate the average number of interactions in each user group. These results show that our method consistently outperforms HGCL at different levels of data


sparsity. PERFORMANCE WITH DIFFERENT DATA SPARSITY DEGREES IN SOCIAL RELATIONS In order to assess the robustness of our model with respect to social networks with varying levels of sparsity,


we constructed graphs by randomly dropping edges from the original social relation graph at different ratios. We subsequently evaluated the performance of our model on these graphs


characterized by distinct sparsity levels. The results of this evaluation are presented in Fig. 5. It is observed that the model’s performance experiences a slight decline when 10 percent of


the edges are eliminated. Subsequently, the performance exhibits fluctuations within the range of 10 percent to 90 percent edge removal. Ultimately, a significant deterioration in model


performance is noted when all edges are removed. These findings indicate that our model demonstrates commendable performance in scenarios that are not extremely sparse, thereby affirming its


robustness. HYPERPARAMETER ANALYSIS We further perform hyperparameter sensitivity analysis on the Ciao dataset, showing the impact of the weight coefficient \(\beta _{1}\) of the final


contrastive loss, the strength \(\beta _{2}\) of the \(L_{2}\) regularization, weight coefficient \(\lambda _{1}\) of the contrastive loss for users, weight coefficient \(\lambda _{2}\) of


the contrastive loss for items, and the temperature coefficient \(\tau\) in the contrastive losses. The results are shown in Fig. 6. IMPACT OF WEIGHT COEFFICIENT \(\beta _{1}\). We investigate the performance of our model when the parameter \(\beta _{1}\) is taken from {0.01, 0.1, 0.5, 1.0}, and the results are presented in the top left of Fig. 6. We observe that our model achieves optimal recommendation performance when \(\beta _{1}\) approaches 0.1. However, as this value is further increased, the performance rapidly deteriorates. This finding suggests that


incorporating additional social relations and semantic relations of items with appropriate weighting can enhance recommendation performance. IMPACT OF STRENGTH PARAMETER \(\beta _{2}\). We analyze the performance of our model as the parameter \(\beta _{2}\) varies over {0.02, 0.04, 0.06, 0.08}. The results are shown in the top right of Fig. 6. The lines depict that our model


achieves better performance around 0.04. If the value of \(\beta _{2}\) is too small, the regularization impact is minimal, potentially leading to overfitting. Conversely, if \(\beta _{2}\)


is too large, the regularization effect becomes too pronounced, potentially resulting in underfitting. IMPACT OF WEIGHT COEFFICIENTS \(\lambda _{1}\) AND \(\lambda _{2}\). We generate a heatmap


to comprehensively analyze the impact of varying values of parameters \(\lambda _{1}\) and \(\lambda _{2}\) on the performance of the model. The results are shown in the bottom left of Fig.


 6. We notice that when the parameter \(\lambda _{2}\) exceeds 0.07, regardless of the value of parameter \(\lambda _{1}\), the model performance is consistently poor. The model exhibits


better performance when the parameters \(\lambda _{1}\) and \(\lambda _{2}\) are set around 0.03. IMPACT OF TEMPERATURE COEFFICIENT \(\tau\). To investigate the impact of the temperature


coefficient \(\tau\) on the performance of our model, we conducted a series of experiments varying \(\tau\) from 0.5 to 2.0. The results are shown in the bottom right of Fig. 6. The optimal


temperature coefficient is 0.5, at which the model achieves better recommendation accuracy. This indicates that this temperature value effectively balances the similarity differences between positive and negative sample pairs, preventing both over-amplification and neglect of these differences. CASE STUDY Furthermore, MultiCSR demonstrates not only effective performance


but also high interpretability. To exemplify this, we consider the case of Ciao in a real-world scenario to showcase how the approach of integrating social and item relations through


multi-view contrastive learning can enhance recommendation performance. As shown in Fig. 7, for the user (ID: 1268), in addition to historical rating records, there is also social


relationship information available, which indicates other users that this user trusts. Trust relationships can be interpreted as the degree of agreement with the ratings given by other


users. Therefore, the ratings of this user and the users they trust exhibit a certain level of consistency. Consequently, we integrate the above information with traditional collaborative


filtering information adaptively using contrastive learning to obtain the more accurate embedding representation. As illustrated in Fig. 7, the other users trusted by this user all give a


high rating to the movie (ID: 6782). This preference information will be encoded into the user’s embedding representation through information fusion. In the context of the movie (ID: 6782),


the dataset contains some movies that are similar to it, such as The Flash and Batman, both of which are products of DC Comics. Consequently, users who have a specific preference for one of


these movies are highly likely to have similar preferences for the others. Therefore, this information can be utilized to enhance the item embedding representations. Finally, by leveraging


the embedding representation of the user and movie as discussed above, we can enhance the accuracy of predicting the user’s rating for this movie, thereby improving the effectiveness of the


recommendation. CONCLUSION In this paper, we propose MultiCSR, a novel framework for social recommendation that effectively incorporates social networks and item knowledge graphs. Through


heterogeneous graph learning, we construct three distinct yet complementary views for the recommendation system: the user-user social, user-item behavioral, and item-item relational views. These


views provide a multifaceted perspective on user preferences and item characteristics, forming the foundation for our advanced modeling techniques. Then, by harnessing the power of adaptive


adjustment mechanisms and multi-view contrastive learning, MultiCSR aligns the representations across these different views, effectively capturing the intricate interplay between social


connections and item semantics. This alignment is crucial for delivering personalized recommendations that resonate with individual user preferences. Our extensive experimental evaluation on


real-life datasets demonstrates the superior performance of MultiCSR against a diverse array of state-of-the-art recommendation models. The results highlight the importance of integrating


social and semantic information through a multi-view learning paradigm, and showcase the potential of MultiCSR as a powerful tool for building sophisticated social recommendation systems.


While our MultiCSR framework demonstrates superior performance, several limitations remain. Issues such as the cold start problem and biases in the recommendations need further attention.


To address the cold start problem, a potential future direction involves incorporating additional information such as user profiles and item content details like images and textual


descriptions. Subsequently, developing an efficient model framework that integrates these diverse sources of information could be explored. Furthermore, it is worth investigating how to


design a model framework that incorporates multimodal knowledge from large language models. For biases in the recommendations, studying the utilization of data augmentation techniques to


generate diverse data samples in order to balance data distribution is a worthwhile endeavor. DATA AVAILABILITY All data generated or analysed during this study are included in this


published article. REFERENCES * Linden, G., Smith, B. & York, J. Amazon.com recommendations: Item-to-item collaborative filtering. _IEEE Internet Computing_ 7, 76–80 (2003). Google


Scholar  * Liu, J., Dolan, P. & Pedersen, E. R. Personalized news recommendation based on click behavior. In _Proceedings of the 15th international conference on Intelligent user


interfaces_, 31–40 (2010). * Okura, S., Tagami, Y., Ono, S. & Tajima, A. Embedding-based news recommendation for millions of users. In _Proceedings of the 23rd ACM SIGKDD international


conference on knowledge discovery and data mining_, 1933–1942 (2017). * Guy, I., Zwerdling, N., Ronen, I., Carmel, D. & Uziel, E. Social media recommendation based on people and tags. In


_Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval_, 194–201 (2010). * Koren, Y., Bell, R. & Volinsky, C. Matrix


factorization techniques for recommender systems. _Computer_42, 30–37 (2009). Article  Google Scholar  * Rendle, S. Factorization machines. In _2010 IEEE International conference on data


mining_, 995–1000 (IEEE, 2010). * Koren, Y. Factorization meets the neighborhood: a multifaceted collaborative filtering model. In _Proceedings of the 14th ACM SIGKDD international


conference on Knowledge discovery and data mining_, 426–434 (2008). * Hu, Y., Koren, Y. & Volinsky, C. Collaborative filtering for implicit feedback datasets. In _2008 Eighth IEEE


international conference on data mining_, 263–272 (IEEE, 2008). * Mnih, A. & Salakhutdinov, R. R. Probabilistic Matrix Factorization. In _Advances in Neural Information Processing


Systems_, vol. 20 (Curran Associates, Inc., 2007). * Takács, G., Pilászy, I., Németh, B. & Tikk, D. Scalable collaborative filtering approaches for large recommender systems. _The


Journal of Machine Learning Research_10, 623–656 (2009). Google Scholar  * He, X., Zhang, H., Kan, M.-Y. & Chua, T.-S. Fast matrix factorization for online recommendation with implicit


feedback. In _Proceedings of the 39th International ACM SIGIR conference on Research and Development in Information Retrieval_, 549–558 (2016). * He, X. et al. Neural collaborative


filtering. In _Proceedings of the 26th international conference on world wide web_, 173–182 (2017). * Wang, X., He, X., Wang, M., Feng, F. & Chua, T.-S. Neural graph collaborative


filtering. In _Proceedings of the 42nd international ACM SIGIR conference on Research and development in Information Retrieval_, 165–174 (2019). * He, X. et al. Lightgcn: Simplifying and


powering graph convolution network for recommendation. In _Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval_, 639–648 (2020). *


Kipf, T. N. & Welling, M. Semi-supervised classification with graph convolutional networks. In _International Conference on Learning Representations_ (2017). * Hamilton, W., Ying, Z.


& Leskovec, J. Inductive Representation Learning on Large Graphs. In _Advances in Neural Information Processing Systems_, vol. 30 (Curran Associates, Inc., 2017). * Veličković, P. et al.


Graph attention networks. _arXiv preprint _[SPACE]arXiv:1710.10903 (2017). * McPherson, M., Smith-Lovin, L. & Cook, J. M. Birds of a feather: Homophily in social networks. _Annual


Review of Sociology_27, 415–444 (2001). Article  Google Scholar  * Lewis, K., Gonzalez, M. & Kaufman, J. Social selection and peer influence in an online social network. _Proceedings of


the National Academy of Sciences_109, 68–72 (2012). Article  ADS  CAS  Google Scholar  * Yang, B., Lei, Y., Liu, J. & Li, W. Social collaborative filtering by trust. _IEEE Transactions


on Pattern Analysis and Machine Intelligence_39, 1633–1647. https://doi.org/10.1109/TPAMI.2016.2605085 (2017). Article  PubMed  Google Scholar  * Chen, C., Zhang, M., Liu, Y. & Ma, S.


Social attentional memory network: Modeling aspect- and friend-level differences in recommendation. In _Proceedings of the Twelfth ACM International Conference on Web Search and Data


Mining_, WSDM ’19, 177-185, https://doi.org/10.1145/3289600.3290982 (Association for Computing Machinery, New York, NY, USA, 2019). * Chen, C. et al. An efficient adaptive transfer neural


network for social-aware recommendation. In _Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval_, SIGIR’19, 225-234,


https://doi.org/10.1145/3331184.3331192 (Association for Computing Machinery, New York, NY, USA, 2019). * Song, W. et al. Session-based social recommendation via dynamic graph attention


networks. In _Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining_, WSDM ’19, 555-563, https://doi.org/10.1145/3289600.3290989 (Association for Computing


Machinery, New York, NY, USA, 2019). * Yu, L., Zhang, C., Pei, S., Sun, G. & Zhang, X. Walkranker: A unified pairwise ranking model with multiple relations for item recommendation.


_Proceedings of the AAAI Conference on Artificial Intelligence_32, https://doi.org/10.1609/aaai.v32i1.11866 (2018). * Long, X. et al. Social recommendation with self-supervised metagraph


informax network. In _Proceedings of the 30th ACM International Conference on Information & Knowledge Management_, CIKM ’21, 1160-1169, https://doi.org/10.1145/3459637.3482480


(Association for Computing Machinery, New York, NY, USA, 2021). * Zhang, Y. et al. Graph learning augmented heterogeneous graph neural network for social recommendation. _ACM Transactions on


Recommender Systems_1, 1–22 (2023). Article  Google Scholar  * Chen, M. et al. Heterogeneous graph contrastive learning for recommendation. In _Proceedings of the sixteenth ACM


international conference on web search and data mining_, 544–552 (2023). * Fan, W. et al. Graph neural networks for social recommendation. In _The World Wide Web Conference_, WWW ’19,


417-426, https://doi.org/10.1145/3308558.3313488 (Association for Computing Machinery, New York, NY, USA, 2019). * Wu, L. et al. A neural influence diffusion model for social recommendation.


In _Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval_, SIGIR’19, 235-244, https://doi.org/10.1145/3331184.3331214 (Association


for Computing Machinery, New York, NY, USA, 2019). * Wu, L. et al. Diffnet++: A neural influence and interest diffusion network for social recommendation. _IEEE Trans. on Knowl. and Data


Eng._34, 4753-4766, https://doi.org/10.1109/TKDE.2020.3048414 (2022). * Zhang, C., Song, D., Huang, C., Swami, A. & Chawla, N. V. Heterogeneous graph neural network. In _Proceedings of


the 25th ACM SIGKDD international conference on knowledge discovery & data mining_, 793–803 (2019). * Chen, L., Wu, L., Hong, R., Zhang, K. & Wang, M. Revisiting graph based


collaborative filtering: A linear residual graph convolutional network approach. _In Proceedings of the AAAI conference on artificial intelligence_34, 27–34 (2020). Article  Google Scholar 


* Wang, X. et al. Multi-level recommendation reasoning over knowledge graphs with reinforcement learning. _In Proceedings of the ACM Web Conference_2022, 2098–2108 (2022). Google Scholar  *


Chen, T., Kornblith, S., Norouzi, M. & Hinton, G. A simple framework for contrastive learning of visual representations. In _International conference on machine learning_, 1597–1607


(PMLR, 2020). * Oord, A. v. d., Li, Y. & Vinyals, O. Representation learning with contrastive predictive coding. _arXiv preprint_[SPACE]arXiv:1807.03748 (2018). * Gao, T., Yao, X. &


Chen, D. Simcse: Simple contrastive learning of sentence embeddings. _arXiv preprint_[SPACE]arXiv:2104.08821 (2021). * Logeswaran, L. & Lee, H. An efficient framework for learning


sentence representations. _arXiv preprint_[SPACE]arXiv:1803.02893 (2018). * Velickovic, P. et al. _Deep graph infomax. ICLR (Poster)_2, 4 (2019). Google Scholar  * Zhu, Y. et al. Deep graph


contrastive representation learning. _arXiv preprint_[SPACE]arXiv:2006.04131 (2020). * Hassani, K. & Khasahmadi, A. H. Contrastive multi-view representation learning on graphs. In


_International conference on machine learning_, 4116–4126 (PMLR, 2020). * Wu, J. et al. Self-supervised graph learning for recommendation. In _Proceedings of the 44th International ACM SIGIR


Conference on Research and Development in Information Retrieval_, SIGIR ’21, 726-735, https://doi.org/10.1145/3404835.3462862 (Association for Computing Machinery, New York, NY, USA, 2021).


* Xie, X. et al. Contrastive learning for sequential recommendation. In _2022 IEEE 38th international conference on data engineering (ICDE)_, 1259–1273 (IEEE, 2022). * Xia, X., Yin, H., Yu,


J., Shao, Y. & Cui, L. Self-supervised graph co-training for session-based recommendation. In _Proceedings of the 30th ACM International conference on information & knowledge


management_, 2180–2190 (2021). * Wei, Y. et al. Contrastive learning for cold-start recommendation. In _Proceedings of the 29th ACM International Conference on Multimedia_, 5382–5390 (2021).


* Ma, Y., He, Y., Zhang, A., Wang, X. & Chua, T.-S. Crosscbr: Cross-view contrastive learning for bundle recommendation. In _Proceedings of the 28th ACM SIGKDD Conference on Knowledge


Discovery and Data Mining_, 1233–1241 (2022). * Yu, J. et al. Self-supervised multi-channel hypergraph convolutional network for social recommendation. In _Proceedings of the Web Conference


2021_, WWW ’21, 413-424, https://doi.org/10.1145/3442381.3449844 (Association for Computing Machinery, New York, NY, USA, 2021). * Wang, T. & Isola, P. Understanding contrastive


representation learning through alignment and uniformity on the hypersphere. In _International conference on machine learning_, 9929–9939 (PMLR, 2020). * Gutmann, M. & Hyvärinen, A.


Noise-contrastive estimation: A new estimation principle for unnormalized statistical models. In _Proceedings of the thirteenth international conference on artificial intelligence and


statistics_, 297–304 (JMLR Workshop and Conference Proceedings, 2010). * Rendle, S., Freudenthaler, C., Gantner, Z. & Schmidt-Thieme, L. Bpr: Bayesian personalized ranking from implicit


feedback. _arXiv preprint_[SPACE]arXiv:1205.2618 (2012). * You, Y. et al. Graph contrastive learning with augmentations. _Advances in neural information processing systems_33, 5812–5823


(2020). Google Scholar  * Wang, X. et al. Heterogeneous graph attention network. In _The world wide web conference_, 2022–2032 (2019). ACKNOWLEDGEMENTS This research was


partially supported by Guanghua Talent Project of Southwestern University of Finance and Economics and the National Natural Science Foundation of China (72171196), and the National Social


Science Foundation of China (22XJY013). AUTHOR INFORMATION AUTHORS AND AFFILIATIONS * School of Business Administration, Faculty of Business Administration, Southwestern University of


Finance and Economics, Chengdu, 611130, China Rui Chen, Jialu Chen & Xianghua Gan CONTRIBUTIONS R.C.: Conceptualization, Methodology, Investigation. J.C.: Software, Formal analysis, Investigation, Writing-original draft, Validation. X.G.: Project administration,


Supervision, Investigation, Writing-review and editing, Resources. CORRESPONDING AUTHOR Correspondence to Xianghua Gan. ETHICS DECLARATIONS COMPETING INTERESTS The authors declare no


competing interests. ADDITIONAL INFORMATION PUBLISHER’S NOTE Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. RIGHTS AND


PERMISSIONS OPEN ACCESS This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing,


distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and


indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third


party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the


article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright


holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/. ABOUT THIS ARTICLE CITE THIS ARTICLE Chen, R., Chen, J. & Gan,


X. Multi-view graph contrastive learning for social recommendation. _Sci Rep_ 14, 22643 (2024). https://doi.org/10.1038/s41598-024-73336-5 * Received: 30 April 2024 *


Accepted: 16 September 2024 * Published: 30 September 2024 * DOI: https://doi.org/10.1038/s41598-024-73336-5


KEYWORDS * Recommendation * Contrastive Learning * Graph neural network * Collaborative filtering