# Enterprise Social Media: Rethinking the Role of Recommendation Systems

**Authors:** Tenshi Munasinghe, Koki Takashima

## Executive Summary

This project rethinks how the recommendation systems (RS) in social media should behave in order to maximize user well-being while considering the incentives of the platform provider. We drew on prior research to argue that existing RS increase users' screen time, negatively impacting their well-being. While there are opt-in features to mitigate this effect, we argue that there is too much friction for users to make use of them effectively. The discussion then turned to how such features are implemented: the providers of the largest social media platforms have little incentive to let users spend less time on their platforms. This brought us to consider decentralized platforms as a way to weaken the incentives that big tech companies have. With decentralized platforms in mind, we were left with a few options:

- Modern RS that maximizes user engagement.
- No personalized RS.
- Highly user-customizable RS.
- Opinionated RS that prioritizes user well-being.

Weighing the pros and cons of each, we concluded that an opinionated RS that prioritizes user well-being would best suit users' needs. However, more research is needed to determine whether platforms with such systems are feasible in an attention-driven economy.

## Why "Time Spent" Matters

Research shows that the total duration a user spends on a platform is a primary predictor of both academic performance and mental health.

### The Academic Gap (GPA)

- **The Displacement Effect:** Time is a finite resource. Every hour spent on social media is an hour "displaced" from studying.
- **Motives Matter:** Users driven by "entertainment" or "boredom" tend to spend significantly more time on platforms, which directly correlates with lower GPAs.

### The Metrics

Daily time usage correlates with user well-being, despite the criticism that it is neither the only nor a complete metric. Social-profile attributes such as gender, income, and friend count can also be taken into account when assessing addiction.

## Usage Patterns and Influence of Social Media

### General Patterns of Usage and Prevalence

Studies indicate that 90% of U.S. emerging adults (ages 18–29) use social media every day, with roughly 24% of adolescents using it "almost constantly". Within U.S. samples, a "high frequency" usage profile was identified for users spending between 61 and 70 minutes per day; this group was more likely to be composed of women and individuals with larger social networks (e.g., more Facebook friends). In other contexts, such as Vietnam, the average daily usage for the general population is approximately two hours and thirty minutes, with youth representing the most active demographic.

### Time Usage as a Mediator of Life Outcomes

Research has established that daily time usage acts as a key mediator between a student's motives for using social media and their academic achievement. Specifically, increasing time spent on these platforms, regardless of the initial reason, ultimately detracts from academic performance due to the displacement of study time. Different motives lead to different patterns of time spent: socialization and entertainment motives are associated with higher daily time usage and lower GPAs. Conversely, academic motives can actually lead to less time spent on the platform, as the app is used as a specific tool rather than a space for endless consumption.
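To make the mediation logic concrete, here is a minimal sketch, on synthetic data rather than the data of Cuong et al. (2025), of the product-of-coefficients approach: one regression estimates how motive drives daily time usage, a second estimates how time usage drives GPA while controlling for motive, and the product of the two slopes is the indirect "displacement" effect. All variable names and coefficients are illustrative assumptions.

```python
# Illustrative sketch (not from the cited study): estimating the indirect
# effect of an entertainment motive on GPA via daily time usage, using the
# product-of-coefficients approach on synthetic data. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

motive = rng.normal(size=n)                      # entertainment motive (standardized)
time_spent = 0.6 * motive + rng.normal(size=n)   # motive raises daily time usage
gpa = -0.5 * time_spent + 0.05 * motive + rng.normal(size=n)  # time displaces study

def ols(y, *xs):
    """Least-squares slopes of y on predictors xs (plus an intercept)."""
    X = np.column_stack([np.ones_like(y), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]  # drop the intercept

a = ols(time_spent, motive)[0]        # path a: motive -> time spent
b = ols(gpa, time_spent, motive)[0]   # path b: time spent -> GPA, controlling for motive
print(f"indirect (mediated) effect a*b = {a * b:.3f}")  # approx. -0.30 here
```

In this toy setup the indirect effect comes out negative, matching the reported pattern: stronger entertainment motives raise time spent, and time spent depresses GPA.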
### Consequences of Excessive Time (Overuse)

Evidence confirms that the overuse of recommendation systems negatively affects users' subjective well-being, often described as their general happiness or life satisfaction. Long-term engagement models suggest that persistent consumption is a dynamic process in which interest is a limited resource; continuous use gradually exhausts the drive to engage, potentially leading to burnout, addiction, or user "churn" (leaving the platform). Users with unclear self-perceptions were found to be more susceptible to external stimuli and are therefore more likely to overuse recommendation systems.

### Removing RS Reduces Screen Time

In a massive-scale audit involving millions of accounts, researchers compared the standard algorithmic feed to a reverse-chronological feed (where personalization was removed) and found that screen time decreased on both major platforms.

- Facebook: Users assigned to the chronological feed (RS removed) showed a significantly lower average daily proportional time spent than the algorithmic control group.
- Instagram: A similar trend occurred on Instagram, where the chronological feed group recorded a lower average proportional time spent than the algorithmic group.
- App Usage Metrics: Direct app-use data showed that the Facebook app was used for an average of 51.83 hours in the chronological group, compared to 65.19 hours in the group where recommendation algorithms were active.

Reduced screen time, however, does not benefit the platform owners, which means that simply banning RS is not feasible.

## Proposed Solution: The "Ideal" Recommendation System

Instead of asking, "How can we keep the user online right now?", the ideal system should ask, "How can we keep the user healthy and loyal for the next five years?" The goal shifts from "instant clicks" to long-term value. The system should detect when a user is "mindlessly scrolling" (low-value engagement) and subtly slow down the delivery of high-stimulation content.

## Features of the "Ideal" Recommendation System

### The "Battery" Approach to Interest

Traditional systems treat a user's attention like an infinite resource, but the sources show that prolonged engagement depletes cognitive resources like attention and self-control.

- Smart Breaks: The ideal system would act as a resource manager, learning when a user's "interest battery" is running low (a toy sketch of this idea follows the examples below).
- Replenishing Interest: Instead of pushing more content when a user is tired, the algorithm would suggest a break to allow their interest to "recharge".
- Long-Term Loyalty: By preventing the exhaustion that leads to "churn" (users leaving the platform permanently), the system encourages long-term habits rather than short-term binges.
- Real-life examples:
  - YouTube's "Take a break" reminder nudges you to pause after a chosen interval (Digital Wellbeing). (See YouTube Help: "Take a break reminder".)
  - Netflix's "Are you still watching?" prompt interrupts long autoplay sessions to confirm you're still actively watching.
  - TikTok provides screen time management tools (daily limits and bedtime-style wind-down prompts) that can discourage extended binge sessions.
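As a rough illustration of how a feed could treat interest as a depletable resource, the sketch below tracks a per-session "interest battery" that drains with each item consumed, recharges during idle time, and triggers a break prompt below a threshold. This is a simplified toy model, not the optimization method of Saig & Rosenfeld (2023); the class name, constants, and threshold are all invented.

```python
# Toy "interest battery": each item consumed drains the battery, idle time
# recharges it, and the feed offers a break instead of the next item once the
# battery runs low. All constants are illustrative assumptions.
from dataclasses import dataclass, field
import time

@dataclass
class InterestBattery:
    level: float = 1.0            # 1.0 = fully rested, 0.0 = exhausted
    drain_per_item: float = 0.04
    recharge_per_minute: float = 0.02
    break_threshold: float = 0.25
    last_event: float = field(default_factory=time.monotonic)

    def _recharge(self) -> None:
        # Credit back interest for time spent away from the feed.
        idle_minutes = (time.monotonic() - self.last_event) / 60.0
        self.level = min(1.0, self.level + idle_minutes * self.recharge_per_minute)
        self.last_event = time.monotonic()

    def consume_item(self) -> None:
        self._recharge()
        self.level = max(0.0, self.level - self.drain_per_item)

    def should_suggest_break(self) -> bool:
        self._recharge()
        return self.level < self.break_threshold

def next_feed_action(battery: InterestBattery) -> str:
    """Decide whether to serve another item or nudge the user to pause."""
    if battery.should_suggest_break():
        return "show_break_prompt"   # e.g., a "Time for a break?" card
    battery.consume_item()
    return "show_next_item"
```

The design point is that the break prompt is part of the ranking loop itself, not a separate opt-in setting: when the battery is low, the "next item" simply is a pause.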
### Prioritizing Quality Time Over Total Time

The sources indicate that simply spending more time on a platform can lower subjective well-being (happiness) and even damage real-life success, such as a student's GPA.

- Motive-Aware Feed: The system should distinguish between goal-oriented use (like looking for information) and mindless scrolling.
- Supporting Real-Life Goals: The ideal RS would prioritize high-value content (academic or educational) that helps users achieve their goals quickly, rather than using addictive features like "infinite feeds" to trap them in unproductive loops.
- Real-life examples:
  - YouTube's Learning/Education surfaces (and the broader "Digital Wellbeing" feature set) are examples of platforms explicitly creating modes and spaces for higher-intent usage rather than pure entertainment loops.
  - Instagram's "You're all caught up" message (and related feed affordances) is an example of signaling "stop points" instead of endless novelty.

### Fair Information and Safety Filters

Current systems often act like a "megaphone", accidentally making certain political groups louder than others or spreading "rude" (uncivil) content to provoke a reaction.

- Neutral Amplification: The ideal system would be regularly audited to ensure it is not giving an unfair advantage to one political side over another.
- Active Safety: It would use safety filters to proactively hide content from untrustworthy sources or posts that use aggressive language and slurs.
- Promoting Variety: To prevent "echo chambers" where people only see one point of view, the system would intentionally mix in diverse perspectives and topics.
- Real-life examples:
  - Meta (Facebook/Instagram) has used misinformation interventions such as reducing the distribution of content rated false by fact-checkers and applying warning labels.
  - X (formerly Twitter) uses Community Notes to add context to potentially misleading posts (a "credibility layer" that does not rely only on the original author).
  - Spotify has shipped "variety injection" ideas that deliberately mix in recommendations outside a user's usual listening patterns.

### Transparency (Giving the User the Steering Wheel)

A major reason users lose trust is the opacity of algorithms: they don't know why certain things are being shown to them.

- Clear Explanations: The ideal system would provide simple reasons for every recommendation (e.g., "You're seeing this because you liked a similar video yesterday").
- User Agency: It would give the user direct control over their settings, allowing them to "turn down" certain topics or "turn up" content variety whenever they choose (a sketch of such controls follows the examples below).
- Real-life examples:
  - TikTok's "Why this video" explains (at a high level) which signals led to a recommendation.
  - Instagram provides "Why you're seeing this" style explanations for some surfaced content (and similarly for ads).
  - YouTube offers explicit controls like "Not interested" / "Don't recommend channel" to directly steer future recommendations.
  - TikTok and Instagram allow keyword filtering (and related controls) that let users "turn down" topics they don't want to see.
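To show how such controls could enter a ranking function, here is a hedged sketch in which user-set per-topic "dials" scale a base relevance score, a variety weight boosts novel items, and every result carries a plain-language explanation. The `Item` fields, weights, and scoring formula are hypothetical, not any platform's real API.

```python
# Sketch of a user-steerable ranker: per-topic "turn down" multipliers and a
# user-chosen variety weight adjust the base relevance score, and each result
# carries a human-readable reason. All fields and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    topic: str
    relevance: float   # engagement-style relevance from the base model, 0..1
    novelty: float     # distance from the user's usual topics, 0..1

def rank(items: list[Item],
         topic_dials: dict[str, float],   # user control: 0.0 mutes a topic, 1.0 is neutral
         variety_weight: float) -> list[tuple[Item, float, str]]:
    scored = []
    for it in items:
        dial = topic_dials.get(it.topic, 1.0)
        score = dial * it.relevance + variety_weight * it.novelty
        reason = (f"Shown because of your interest in '{it.topic}' "
                  f"(topic dial {dial:.1f}, variety boost {variety_weight:.1f}).")
        scored.append((it, score, reason))
    return sorted(scored, key=lambda t: t[1], reverse=True)

feed = rank(
    [Item("a", "gaming", 0.9, 0.1), Item("b", "science", 0.5, 0.8)],
    topic_dials={"gaming": 0.3},   # the user turned gaming down
    variety_weight=0.5,            # and turned variety up
)
for item, score, reason in feed:
    print(f"{item.item_id}: {score:.2f} - {reason}")
```

Exposing the reason string alongside the score is the key move: the same signals that rank the item also explain it, so transparency does not require a separate system.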
## Why Today's "Wellbeing Features" Don't Change the Platform's Core Incentives

Most "healthy use" tools (break reminders, time limits, "are you still watching?", transparency panels) are fixes layered over an engagement-optimized system. They are usually optional or easy to override, while the default experience is still built around autoplay, infinite feeds, and rapid novelty. That means the "continue" path stays the default, and the "stop" path is something users have to opt in to. These features can deflect criticism and satisfy policy goals without changing the underlying incentive: the recommender is still optimized to maximize attention. An ideal RS would make healthy use the default objective, not an add-on.

Recommendation systems are default infrastructure. They work automatically in the background, shaping what you see even if you never "opt in". The feed is personalized, rankings adapt, and autoplay/next-item selection happens by default. Because RS is not an opt-in feature, a "healthy experience" cannot rely on opt-in tools like break reminders or time limits. Opt-in safety puts the burden on the user at the exact moment the system is designed to reduce deliberation. In practice, only highly motivated users will enable these settings, while everyone else experiences the full-strength engagement loop. So the goal should be to make the RS itself healthy by default: the ranking objective should optimize for long-term satisfaction, include natural stopping points, and reduce exploitative repetition, so that users don't have to discover hidden settings to avoid burnout.

## Alternative Direction: Decentralized Platforms (Changing the Incentives)

A different way to reduce addictive recommendation dynamics is to change who controls the feed in the first place. In a decentralized platform, there is no single company that simultaneously owns the ranking algorithm and profits from maximizing attention, which weakens the structural incentive to optimize purely for watch time. Because ranking can be provided by different services or clients, users can choose between feeds (chronological, topic-based, trusted curation, or wellbeing-first) without leaving the social network, and harmful designs are easier to route around by switching clients or algorithms. This architectural flexibility can also improve transparency: open protocols and third-party implementations make ranking choices more visible and easier to audit.

Decentralized platforms do have downsides: governance, moderation, and abuse prevention become harder, and "healthy by default" still depends on which defaults communities adopt. But decentralization can meaningfully realign incentives and make genuine choice over recommendation behavior possible.
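The architectural point can be sketched as "ranking behind a swappable interface": if the client can select the ranker, in the spirit of custom feed generators on open protocols such as Bluesky's AT Protocol, a wellbeing-first feed becomes a drop-in alternative to an engagement-maximizing one. The interface and class names below are invented for illustration and do not reflect any real protocol's API.

```python
# Sketch of "ranking as a swappable service": the client picks a feed
# algorithm instead of being locked into the platform's engagement ranker.
# All interface and class names are invented for illustration.
from typing import Protocol

class Post:
    def __init__(self, post_id: str, created_at: float, predicted_engagement: float):
        self.post_id = post_id
        self.created_at = created_at                    # Unix timestamp
        self.predicted_engagement = predicted_engagement  # 0..1 from some model

class FeedRanker(Protocol):
    """Anything with a rank() method can serve as the user's chosen feed."""
    def rank(self, posts: list[Post]) -> list[Post]: ...

class ChronologicalRanker:
    def rank(self, posts: list[Post]) -> list[Post]:
        return sorted(posts, key=lambda p: p.created_at, reverse=True)

class WellbeingFirstRanker:
    """Caps high-stimulation items instead of maximizing predicted engagement."""
    def rank(self, posts: list[Post]) -> list[Post]:
        calm = [p for p in posts if p.predicted_engagement < 0.8]
        return sorted(calm, key=lambda p: p.created_at, reverse=True)

def render_feed(posts: list[Post], ranker: FeedRanker) -> list[str]:
    # The client, not the platform, decides which ranker runs here.
    return [p.post_id for p in ranker.rank(posts)]
```

Because the ranker is chosen at the client, switching from the engagement-style default to a wellbeing-first feed is a one-line change for the user's software, which is exactly the kind of routing-around that a single vertically integrated platform prevents.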
## So What? Why Does This Matter for Social Media?

Social media is not addictive by accident: its core business model rewards platforms for maximizing attention, and recommendation systems are the mechanism that turns that incentive into an always-on, personalized feed. Because RS operates as default infrastructure, deciding what users see without any opt-in, it shapes daily habits at scale, including how long people stay, what they focus on, and how often they return. That matters because the costs of overuse are not abstract: research links heavy, motive-misaligned use with lower well-being and real-world performance (e.g., academic decline), and "burnout" can turn into churn, distrust, and long-term harm to both users and platforms.

This is why opt-in "wellbeing settings" are not enough. If a healthy experience requires the user to actively enable breaks, limits, or controls, the burden falls at the very moment the system is engineered to reduce deliberation. A healthier social media ecosystem therefore requires changing the default objective of the recommender itself (optimizing for long-term satisfaction, adding natural stopping points, and reducing exploitative repetition) so that ordinary users get a healthy-by-default experience rather than having to discover it in settings.

The "no personalization" option is a trade-off between escaping a toxic recommendation algorithm and not getting the best out of the platform's content. On the surface, this problem looks solvable by giving users the ability to customize their feed. But that requires high effort from users, which creates friction between them and the platform. We believe that recommendation systems should exist and be designed to maximize the value the platform provides to its users, with minimum effort from the users' end.

## References

1. **Bekalu, M. A., Sato, T., & Viswanath, K. (2023).** Conceptualizing and Measuring Social Media Use in Health and Well-being Studies: Systematic Review. *J Med Internet Res, 25*(1), e43191. DOI: 10.2196/43191.
2. **Cuong, T. V., Khai, N. T., Oo, T. Z., & Józsa, K. (2025).** The Impact of Social Media Use Motives on Students' GPA: The Mediating Role of Daily Time Usage. *Education Sciences, 15*(3), 317. DOI: 10.3390/educsci15030317.
3. **Hemans, M., & Ocansey, D. K. W. (2021).** The Impact of Recommendation System Overuse on the Subjective Wellbeing of Internet Users. *International Journal of Scientific Research in Science, Engineering and Technology, 8*(1), 239–247. DOI: 10.32628/IJSRSET218156.
4. **Saig, E., & Rosenfeld, N. (2023).** Learning to Suggest Breaks: Sustainable Optimization of Long-Term User Engagement. *Proceedings of the 40th International Conference on Machine Learning* (PMLR 202).
5. **Scott, C. F., Bay-Cheng, L. Y., Prince, M. A., Nochajski, T. H., & Collins, R. L. (2017).** Time spent online: Latent profile analyses of emerging adults' social media use. *Computers in Human Behavior, 75*, 311–319. DOI: 10.1016/j.chb.2017.05.026.
6. **Zhou, R. (2024).** Understanding the Impact of TikTok's Recommendation Algorithm on User Engagement. *International Journal of Computer Science and Information Technology, 3*(2). DOI: 10.62051/ijcsit.v3n2.24.