## Clarifying our contributions
We extend our sincere gratitude to all the reviewers for their dedication in reviewing our paper. We especially acknowledge the critical insights from reviewers eCf1, k4cA and es1M concerning the FinGPT model and our paper's evaluation. We have earnestly endeavored to address the concerns within the tight timeframe allotted for rebuttal.
However, we would like to clarify that our primary contribution is neither the training of a new LLM nor its evaluation. Instead, we have centered our efforts on curating the data sources tailored for the exploration of LLMs within the finance domain. Specifically, our data contributions encompass:
1. The design and implementation of **real-time data collection and curation pipelines**, spanning 34 diverse financial data sources. This is a pioneering initiative aimed at democratizing access to internet-scale financial data.
2. The fine-tuning of existing LLMs using LoRA and reinforcement learning. Notably, FinGPT has exhibited strong performance in sentiment analysis and several pivotal financial applications, **underscoring the value and potential of our data sources**.
3. Open-sourcing our data pipelines, training codes, and the resulting models, paving the way for future research in the area.
Hence, our model training and evaluations are not meant to be exhaustive but merely illustrative of the capabilities of our data. We believe that the data sources we have enabled are in themselves a significant contribution to the broader community.
## Response to Common Concerns
We would like to thank all the reviewers for their constructive comments and valuable feedback. We address two common concerns below.
**1. Codebase access (Reviewers y2dH, eCf1 and k4cA)**
**Misalignment between the codebase and paper is the most significant issue: As shown before, the data and functions demonstrated in the paper fail to align with the codebase well.**
We would like to address a misunderstanding and apologize for any resulting confusion. The functions have indeed been implemented within our codebase. Note that, due to legal constraints, we cannot redistribute internet-scale raw data directly; instead, the [FinNLP](https://github.com/AI4Finance-Foundation/Finnlp) library implements interfaces to the data sources, with usage documentation for [News](https://ai4finance-foundation.github.io/FinNLP/jupyter/Data_Sources_News/), [Social Media](https://ai4finance-foundation.github.io/FinNLP/jupyter/Data_Sources_Social_Media/), and [Company Announcements](https://ai4finance-foundation.github.io/FinNLP/jupyter/Data_Sources_Company_Announcement/) on the [FinNLP website](https://ai4finance-foundation.github.io/FinNLP). Below we outline the specific locations of our key code, followed by a minimal usage sketch.
* Data Sources
  * News
    * Yahoo: [Link (Streaming)](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/news/yahoo_streaming.py)
    * Reuters: [Link (Streaming)](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/news/reuters_streaming.py)
    * Seeking Alpha: [Link (Date Range)](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/news/seekingalpha_date_range.py)
    * Penny Stocks: [Link (Streaming)](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/news/pennystocks_streaming.py)
    * Market Watch: [Link (Streaming)](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/news/marketwatch_streaming.py), [Link (Date Range)](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/news/marketwatch_date_range.py)
    * Tip Ranks: [Link (Streaming)](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/news/tipranks_streaming.py)
    * The Fly: [Link (Streaming)](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/news/thefly_streaming.py)
    * Talk Markets: [Link (Streaming)](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/news/talkmarkets_streaming.py)
    * Alliance News: [Link (Streaming)](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/news/alliancenews_streaming.py)
    * Guru Focus: [Link (Streaming)](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/news/gurufocus_streaming.py)
    * Investor Place: [Link (Streaming)](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/news/investorplace_streaming.py)
    * FMP: [Link (Streaming)](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/news/fmp_streaming.py)
    * Sina: [Link (Date Range)](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/news/sina_finance_date_range.py)
    * Eastmoney: [Link (Streaming)](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/news/eastmoney_streaming.py)
    * Yicai: [Link (Streaming)](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/news/yicai_streaming.py)
    * CCTV: [Link (Date Range)](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/news/akshare_cctv.py)
    * Tushare: [Link (Date Range)](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/news/tushare_major_news.py)
    * FinnHub: [Link (Date Range)](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/news/finnhub_date_range.py)
    * CNBC: [Link (Streaming)](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/news/cnbc_streaming.py)
  * Social Media
    * Twitter: [Link (Date Range)](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/social_media/twitter_date_range.py)
    * Reddit: [Link (Streaming)](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/social_media/reddit_streaming.py)
    * Weibo: [Link (Date Range)](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/social_media/weibo_date_range.py), [Link (Streaming)](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/social_media/weibo_streaming.py)
    * Xueqiu: [Link (Streaming)](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/social_media/xueqiu_streaming.py)
    * Facebook: [Link (Streaming)](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/social_media/facebook_streaming.py)
    * StockTwits: [Link (Streaming)](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/social_media/stocktwits_streaming.py)
    * Eastmoney: [Link (Streaming)](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/social_media/eastmoney_streaming.py)
  * Company Announcement
    * SEC: [Link](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/company_announcement/sec.py)
    * Juchao: [Link](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/company_announcement/juchao.py)
  * Research Dataset
    * Stocknet: [Link](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/datasets/load_dataset.py)
    * CHRNN: [Link](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/datasets/load_dataset.py)
    * TTE: [Link](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/datasets/load_dataset.py)
    * Astock: [Link](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/datasets/load_dataset.py)
    * FiQA SA: [Link](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/datasets/load_dataset.py)
    * FPB: [Link](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/finnlp/data_sources/datasets/load_dataset.py)
* Trained models: [Link](https://huggingface.co/oliverwang15/FinGPT_v32_Llama2_Sentiment_Instruction_LoRA_FT)
* Applications
  * Application I: Robo-Advisor
    * The example provided in our submission: [Link](https://github.com/AI4Finance-Foundation/FinGPT/tree/master/fingpt/chatgpt-robo-advisor-v1)
    * An additional example that deals with filings instead of news: [Link](https://github.com/AI4Finance-Foundation/FinGPT/tree/master/fingpt/chatgpt-robo-advisor-v2)
  * Application II: Quantitative Trading
    * Comparison with LLaMA / Labeling by Market: [Link](https://github.com/AI4Finance-Foundation/FinGPT/tree/master/fingpt/FinGPT-v1)
    * Comparison with BloombergGPT / Labeling by LLMs: [Link](https://github.com/AI4Finance-Foundation/FinGPT/tree/master/fingpt/FinGPT-v3)
  * Application III: Low-code Development
    * Example 1: Developing Factors: [Link](https://github.com/AI4Finance-Foundation/FinGPT/tree/master/fingpt/chatgpt-low-code-development-v1)
    * Example 2: Finding New Factors: [Link](https://github.com/AI4Finance-Foundation/FinGPT/tree/master/fingpt/chatgpt-low-code-development-v2)
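To illustrate how these interfaces are used, here is a minimal usage sketch modeled on the FinNLP documentation, taking the Sina downloader as an example; class and method names follow the linked files and docs, and may evolve with the library:

```python
# Minimal usage sketch following the FinNLP documentation (APIs may evolve):
from finnlp.data_sources.news.sina_finance_date_range import Sina_Finance_Date_Range

# Each downloader fetches raw items for a date range (or a stream)
# and collects them into a pandas DataFrame.
downloader = Sina_Finance_Date_Range()
downloader.download_date_range_all("2023-01-01", "2023-01-03")
downloader.gather_content()  # fetch the full article bodies

df = downloader.dataframe
print(df.head())
```

The other downloaders follow the same pattern, differing mainly in whether they operate over a date range or a stream.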
We respectfully invite the reviewers to reassess our codebase in light of the information above. Furthermore, we have enhanced the documentation on our website at https://ai4finance-foundation.github.io/FinNLP for better clarity.
**2. Insufficient evaluation (Reviewers eCf1, k4cA, and es1M)**
We have run additional experiments to demonstrate the potential of FinGPT. Specifically, instead of using the market to label data, we prompt ChatGPT to label data with seven categories:
```
What is the sentiment of this news?
{news}
Please choose an answer from {strong negative/moderately negative/mildly negative/neutral/mildly positive/moderately positive/strong positive}, then provide some short reasons.
```
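For clarity, the labeling step can be sketched as below, using the `openai` Python package (v0.x API); the model name, zero temperature, and answer handling are simplifications for illustration:

```python
# Minimal ChatGPT-labeling sketch (error handling and rate limiting omitted):
import openai

PROMPT = (
    "What is the sentiment of this news?\n{news}\n"
    "Please choose an answer from {{strong negative/moderately negative/"
    "mildly negative/neutral/mildly positive/moderately positive/strong positive}}, "
    "then provide some short reasons."
)

def label_news(news: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": PROMPT.format(news=news)}],
        temperature=0,  # make labels as deterministic as possible
    )
    return response["choices"][0]["message"]["content"]
```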
Then we use these labels for fine-tuning. We also evaluate on two more datasets: Twitter Financial News Sentiment (TFNS) and News With GPT Instruction (NWGI). The results are summarized below.
| **Weighted F1**  | BloombergGPT | ChatGPT | GPT-4 | ChatGLM2 | Llama2 | FinGPT    |
| ---------------- | :----------: | :-----: | :---: | :------: | :----: | :-------: |
| FPB [1]          | 0.511        | 0.781   | 0.833 | 0.381    | 0.390  | **0.850** |
| FiQA-SA [2]      | 0.751        | 0.730   | 0.630 | 0.790    | 0.800  | **0.860** |
| TFNS [3]         | -            | 0.736   | 0.808 | 0.189    | 0.296  | **0.894** |
| NWGI [4]         | -            | -       | -     | 0.449    | 0.503  | **0.632** |
| Mean             |              |         |       | 0.452    | 0.497  | **0.809** |
| Std              |              |         |       | 0.217    | 0.189  | 0.103     |
| **ACC/F1 Micro** |              |         |       |          |        |           |
| FPB [1]          | -            | 0.781   | 0.834 | 0.464    | 0.462  | **0.851** |
| FiQA-SA [2]      | -            | 0.662   | 0.545 | 0.822    | 0.822  | **0.844** |
| TFNS [3]         | -            | 0.731   | 0.813 | 0.331    | 0.386  | **0.894** |
| NWGI [4]         | -            | -       | -     | 0.560    | 0.583  | **0.636** |
| Mean             |              |         |       | 0.544    | 0.563  | **0.806** |
| Std              |              |         |       | 0.180    | 0.165  | 0.100     |
| **Macro F1**     |              |         |       |          |        |           |
| FPB [1]          | -            | 0.770   | 0.827 | 0.487    | 0.517  | **0.840** |
| FiQA-SA [2]      | -            | 0.611   | 0.539 | 0.560    | 0.610  | **0.752** |
| TFNS [3]         | -            | 0.693   | 0.758 | 0.340    | 0.401  | **0.866** |
| NWGI [4]         | -            | -       | -     | 0.489    | 0.539  | **0.644** |
| Mean             |              |         |       | 0.469    | 0.517  | **0.776** |
| Std              |              |         |       | 0.080    | 0.075  | 0.087     |
FinGPT shows consistent advantages. We have added these results in Section 6.2.3 of the updated manuscript.
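For reference, the three reported metrics can be computed with scikit-learn; the sketch below uses toy integer-encoded labels purely for illustration:

```python
# Sketch of the reported metrics, assuming integer-encoded sentiment labels:
from sklearn.metrics import accuracy_score, f1_score

def sentiment_metrics(y_true, y_pred):
    return {
        "weighted_f1": f1_score(y_true, y_pred, average="weighted"),
        "acc": accuracy_score(y_true, y_pred),  # equals micro-F1 for single-label tasks
        "macro_f1": f1_score(y_true, y_pred, average="macro"),
    }

# Example with 3 classes {0: negative, 1: neutral, 2: positive}:
print(sentiment_metrics([0, 1, 2, 2], [0, 1, 1, 2]))
```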
## Ethics Review from Reviewer nkoo
**Authors should careful discuss differences between this paper and the other, and what their contribution is.**
We wish to express our gratitude to the Ethics reviewer for the careful review of our paper. We would like to clarify the difference between our work and the Arxiv paper (https://arxiv.org/pdf/2306.06031.pdf). Our work focuses on financial data sources; it also provides an evaluation of FinGPT to showcase what can be achieved with our data. In contrast, the Arxiv paper is a concise **vision paper** written by our team, aiming to discuss the future direction of FinGPT. The Arxiv paper is intended only for communication purposes and **will not be published**.
At the time of our submission, the Arxiv paper had not yet been uploaded, hence it was not cited. To address this, we have explicitly referenced it and discussed the differences in Section 2 of our revised manuscript (highlighted in blue).
## Reviewer y2dH
Thank you for the valuable feedback! Please find our responses below.
**I am unclear about the proposed use of this as a robo-advisor.**
A robo-advisor provides financial consultation and advice: a relatively objective and rational digital assistant that offers users reasonable investment analysis and recommendations based on their investment habits. In our paper, we showcase how FinGPT can function as a robo-advisor, offering suggestions based on news. We have added more description of this task in Section 6.1 of the revised manuscript (highlighted in blue). Please let us know if you have additional questions.
**There are minor typographical errors in the checklist - eg: the section number is missing**
Thank you for pointing this out! We have fixed the section number in the revised manuscript.
**I assume the code and data will be open-sourced eventually?**
We have already open-sourced our data preprocessing code, training code, and the pretrained model at https://github.com/AI4Finance-Foundation/FinGPT and https://github.com/AI4Finance-Foundation/FinNLP. Furthermore, we plan to gradually release sample data along with comprehensive data descriptions on our website. Please refer to the Response to Common Concerns for the specific code links.
**The authors do mention that they do not intend for FinGPT to offer financial advice, but then I do not understand why that is one of the proposed applications.**
We do not offer financial advice ourselves, as we neither provide advisory services nor deploy models that offer recommendations. Our role is to share our code and model weights, empowering users to deploy their own models for providing financial advice.
Allowing users to deploy the model independently can help circumvent legal complications. Typically, only entities with legal licenses can offer investment advice. As such, our open-sourced project neither provides nor will offer any APIs for financial advice, in compliance with legal regulations.
Please let us know if you have additional questions.
**The authors have not shared a snapshot of the data itself in the main paper, or the data schema at the very least. This could have been beneficial for the reader in understanding the contributions.**
Thank you for the valuable suggestion! We have provided example data snapshots in our website: [News](https://ai4finance-foundation.github.io/FinNLP/jupyter/Data_Sources_News/), [Social Media](https://ai4finance-foundation.github.io/FinNLP/jupyter/Data_Sources_Social_Media/), [Filings](https://ai4finance-foundation.github.io/FinNLP/jupyter/Data_Sources_Company_Announcement/). We have added these links to Appendix A of the revised manuscript.
## Reviewer eCf1
We appreciate your insightful feedback! Here are our responses:
**Misalignment between the codebase and paper is the most significant issue: As shown before, the data and functions demonstrated in the paper fail to align with the codebase well.**
We believe there is a misunderstanding in this matter, and we sincerely apologize for any lack of clarity in our GitHub repository. We have attached the important code links in the Response to Common Concerns above. If you need more information, please don't hesitate to bring them to our attention.
**They only evaluate their model in one setting: financial sentiment analysis.**
We would like to address a potential misunderstanding here. Our paper primarily centers on providing financial data sources for training FinLLMs, rather than on developing advanced FinLLMs. The evaluation is primarily geared towards illustrating what our curated financial data sources enable. Our choice to emphasize financial sentiment is rooted in its paramount significance within the domain, given that numerous financial tasks (such as robo-advising and trading) are related to sentiment analysis. In addition to sentiment analysis, we have also highlighted FinGPT's performance on low-code development tasks. These case studies exemplify the outcomes achievable through our data sources. We will cover more financial tasks in future work.
**Important baseline missing: the authors only compare their method with the original LLAMA while ignoring a hundred of important baselines.**
Our focus is on datasets rather than benchmarking or evaluation. Consequently, we chose only the most important baselines (e.g., BloombergGPT) to showcase the potential of our data sources. To address your concern, we have added additional baselines and datasets in the Response to Common Concerns above. We will conduct more extensive evaluation with more baselines in future work.
## Reviewer k4cA
We're grateful for your feedback! See our detailed responses below:
**Please clarify the difference between this paper and the Arxiv paper (https://arxiv.org/pdf/2306.06031.pdf). These two papers are strangely too similar to each other, with slight differences in the authorship. I will also include this issue in the ethics section.**
We thank the reviewer for pointing this out. Our work focuses on financial data sources; it also provides an evaluation of FinGPT to showcase what can be achieved with our data. In contrast, the Arxiv paper is a concise **vision paper** written by our team, aiming to discuss the future direction of FinGPT. The Arxiv paper is intended only for communication purposes and **will not be published**.
At the time of our submission, the Arxiv paper had not yet been uploaded, hence it was not cited. To address this, we have explicitly referenced it and discussed the differences in Section 2 of our revised manuscript (highlighted in blue).
**As this paper is focusing on the proposed model, FinGPT, more explanation on model training should be added in the updated version of manuscripts. In addition, the training code of FinGPT should also be included in the public repo.**
Thank you for the feedback. Please find the link to the training code in the Response to Common Concerns above; the sizes of the train and test sets are documented [here](https://github.com/AI4Finance-Foundation/FinGPT/tree/master/fingpt/FinGPT-v3#%E2%85%B3-train--test-set).
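To make the training setup concrete, below is a condensed sketch of the LoRA configuration using the Hugging Face `peft`/`transformers` stack; the base model reflects our FinGPT-v3 setting, while the hyperparameters shown are illustrative rather than the exact values in our code:

```python
# Condensed LoRA setup sketch (hyperparameters are illustrative):
from transformers import AutoModel, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base = "THUDM/chatglm2-6b"  # one of the base models in our experiments
tokenizer = AutoTokenizer.from_pretrained(base, trust_remote_code=True)
model = AutoModel.from_pretrained(base, trust_remote_code=True)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                 # low-rank adapter dimension
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["query_key_value"],  # ChatGLM2 attention projections
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```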
**Need more evaluation on the FinGPT. Currently, from the tables in the paper, it seems that FinGPT can only beat the LLMs that have not been tuned on the financial data. It is acceptable that FinGPT cannot beat BloombergGPT. However, I think at least the following baselines should be added for comparison: GPT-4, ChatGPT, GPT-3, BLOOM, GPT-J (tuned on your data), T5 (tuned on your data), etc. In this way, we can understand the performances and abilities of FinGPT.**
Thank you for the suggestion. We have run more experiments for evaluation (please see the Response to Common Concerns). We are still in the process of running the other baselines you mentioned (GPT-3, BLOOM, T5, etc.) and will update the results when they are ready.
## Reviewer es1M
Thank you for your insightful observations! Allow us to respond to each point below:
**It is unfair and meaningless to compare FinGPT with BloombergGPT from the perspective of training costs. BloombergGPT is a train-from-scratch model while FinGPT is a finetuned model based on other LLMs (LLaMA-65 here) with the help of the LoRA technique. The reduction of the training costs is not surprising and should not be considered as one of the major contributions in this study, since it is brought by fact that the parameter-efficient finetuning is an important and useful technique for fast and low-cost adaptation.**
Thank you for your invaluable feedback. We agree that LoRA plays a pivotal role in reducing training costs, and LoRA itself is not a contribution of our work. Nonetheless, the reduction in training costs stands as a novel contribution for the following two reasons.
Firstly, we introduce a novel approach to fine-tuning the model by leveraging intrinsic market feedback with reinforcement learning. It is important to note that efficient fine-tuning is not only about parameter-efficiency but also about **labeling-efficiency**. Prior work, such as ChatGPT, often demands extensive human involvement in data labeling. In contrast, the labels in our work are generated automatically, without human involvement; a simplified sketch is given below.
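The sketch below conveys the idea of labeling by market feedback; the return horizon, thresholds, and data layout are illustrative assumptions, not the exact values used in our pipeline:

```python
# Simplified market-feedback labeling (horizon/threshold are illustrative):
import numpy as np
import pandas as pd

def label_by_market(news: pd.DataFrame, prices: pd.Series,
                    horizon: int = 5, threshold: float = 0.02) -> pd.Series:
    """Label each news item by the forward return of the related stock."""
    # Forward return over `horizon` trading days after each date.
    fwd_ret = prices.shift(-horizon) / prices - 1.0
    ret = fwd_ret.reindex(news["date"]).to_numpy()
    labels = np.select(
        [ret <= -threshold, ret >= threshold],
        ["negative", "positive"],
        default="neutral",  # also covers dates with missing returns
    )
    return pd.Series(labels, index=news.index)
```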
Secondly, we have implemented the data pipelines to make the application of LoRA possible. Although LoRA has been implemented in other domains, its application within the financial domain remains unexplored until now. The primary hurdle pertains to the lack of financial data. Our contribution lies in unlocking the potential of LoRA with our proposed data pipelines.
To address your concerns, we have rephrased the second contribution in our revised manuscript to clarify the above points (highlighted in blue). We sincerely appreciate your insights and look forward to hearing further feedback from you.
**Specifically, (a) For the data process pipeline, how to remove irrelevant data and error corrections, especially for the social data that contains lots of irrelevant information and errors? (b) Does the stop word removal truncate the sentences or documents? Or reduce their fluency? (c) How to compare the data provided in this study (i.e., used for training FinGPT or obtained following the suggested pipeline) with other public datasets?**
Thank you for the detailed review. The code and a description of the data pipeline can be found in [this notebook](https://github.com/AI4Finance-Foundation/FinNLP/blob/main/test/Data_Cleaning_Pipeline.ipynb); a high-level sketch is given below. We have also added this link to Appendix B of the revised manuscript.
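At a high level, the pipeline applies steps like the following; this is a paraphrase for illustration (column names and the keyword filter are assumptions), and the notebook documents the exact operations, including how irrelevant items are handled:

```python
# High-level paraphrase of the cleaning steps (see the linked notebook for details):
import re
from typing import List
import pandas as pd

def clean_news(df: pd.DataFrame, keywords: List[str]) -> pd.DataFrame:
    """Illustrative cleaning: strip markup, deduplicate, keep relevant items."""
    text = (
        df["content"]
        .str.replace(r"<[^>]+>", " ", regex=True)  # drop HTML tags
        .str.replace(r"\s+", " ", regex=True)      # collapse whitespace
        .str.strip()
    )
    df = df.assign(content=text)
    df = df.dropna(subset=["content"]).drop_duplicates(subset="content")
    # Keep only items that mention a target ticker/company keyword.
    pattern = "|".join(re.escape(k) for k in keywords)
    return df[df["content"].str.contains(pattern, case=False, na=False)]
```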
**Since FinGPT is finetuned on LLaMA-65B, the comparisons between FinGPT and LLaMA on benchmark datasets**
Thanks for the comment. In this work, we mainly focus on financial tasks, so we evaluate only on financial datasets. We have added results on more financial datasets in the Response to Common Concerns above; a detailed comparison is also available [here](https://github.com/AI4Finance-Foundation/FinGPT/tree/master/fingpt/FinGPT-v3).
## Reviewer w1jT
Thank you for the positive feedback. Please see our response to your concern below.
**The claim (applying existing LLMs directly to finance may lead to sub-optimal) lacks sufficient support, specifically, it is hard to define the optimal solution of the proposed method/dataset and also hard to define the sub-optimal solution of GPT4.**
We thank the reviewer for pointing this out. We agree that the optimal solution is hard to define. We have rephrased it to "unsatisfactory".
We would like to highlight that we have provided detailed code links and additional experimental results in the Response to Common Concerns above. We trust this information will further strengthen your support. Please let us know if you have any additional questions.