# Turkish Semantic Relations Dataset
The Turkish Semantic Relations Dataset is a collection of annotated semantic relations between Turkish words. It is intended primarily for semantic language understanding tasks.
## Dataset Details
This dataset consists of 127,203 meaning relations between Turkish words, collected from the TDK Dictionary and Wikidictionary. Each relation is annotated with one of the following labels:
| Label | Description |
|-------|-------------|
| ANTONYM | given words are antonyms |
| AT_LOCATION | word1 is at word2 |
| BY_GOAL | word1 aims for word2 |
| CREATED_BY | word1 is created by word2 |
| SIMILAR_TO | word1 is similar to word2 |
| HYPERNYMY | word1 is the hypernym of word2 |
| MADE_OF | word1 is made of word2 |
| PART_OF | word1 is part of word2 |
| SYNONYMY | given words are synonyms |
| USED_FOR | word1 is used for word2 |
### Samples
Example:
```
{'word1': 'çok',
 'relation_type': 'ANTONYM',
 'word2': 'az',
 'frequency': 1}
```
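As a quick orientation, the sketch below reads records of this shape and groups them by relation label. The file name `relations.json` and the single-JSON-list layout are assumptions; the card does not specify the distribution format.

```python
import json
from collections import Counter

# Hypothetical file name and format: the card does not state how the data is
# distributed, so a single JSON list of records is assumed here.
with open("relations.json", encoding="utf-8") as f:
    relations = json.load(f)

# Count how many pairs carry each relation label.
label_counts = Counter(r["relation_type"] for r in relations)
print(label_counts.most_common())

# Look up all ANTONYM pairs for a given word, e.g. 'çok'.
antonyms = [r["word2"] for r in relations
            if r["word1"] == "çok" and r["relation_type"] == "ANTONYM"]
print(antonyms)  # for the sample record above this would include 'az'
```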
### Fields
Fields of each instance are presented below.
| field | dtype |
|----------|------------|
| word1 | string |
| word2 | string |
| relation_type | string |
| frequency | integer |
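
For illustration only, the schema above can be mirrored by a small typed record; the class name `SemanticRelation` is invented here, and the exact meaning of `frequency` is not defined by the card.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SemanticRelation:
    """One annotated relation, mirroring the fields listed above."""
    word1: str
    word2: str
    relation_type: str  # one of the ten labels, e.g. 'ANTONYM' or 'SYNONYMY'
    frequency: int      # exact meaning not specified by the card; 1 in the sample

# Built from the sample instance shown earlier.
example = SemanticRelation(word1="çok", word2="az",
                           relation_type="ANTONYM", frequency=1)
```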
### Splits
No train/validation/test split is provided by the author.
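
Because no official split exists, users need to create their own. A minimal sketch with a fixed seed is shown below; the 80/10/10 ratios are an arbitrary illustration, not a recommendation from the author.

```python
import random

def make_splits(records, train_frac=0.8, valid_frac=0.1, seed=42):
    """Deterministically shuffle the records and cut train/valid/test slices."""
    shuffled = list(records)
    random.Random(seed).shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_valid = int(len(shuffled) * valid_frac)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_valid],
            shuffled[n_train + n_valid:])

# train_set, valid_set, test_set = make_splits(relations)
```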
## Dataset Creation
### Curation Rationale
The word pairs and their relations were collected from the TDK Dictionary and Wikidictionary to provide an annotated lexical-semantic resource for Turkish.
### Data Source
Awaiting response from the author.
### Annotations
No information about the annotation process or the annotators is provided by the author.
### Quality
The dataset contains duplicate entries, but the text samples otherwise appear clean. The authors measured inter-annotator agreement (IAA) and reported a Cohen's kappa of 0.83.
### Personal and Sensitive Information
The dataset contains only dictionary word pairs, relation labels, and frequency counts; it does not include user-generated text or personal identifiers.
## Considerations
### Social Impact of Dataset
The dataset is intended to support semantic language understanding research for Turkish, contributing lexical-semantic resources for a language other than English.
### Discussion of Biases
No bias analysis is provided by the author. Because the relations are derived from the TDK Dictionary and Wikidictionary, coverage and labeling reflect the content of those sources.
### Other Known Limitations
The dataset contains duplicate entries (see Quality above) and ships without an official train/validation/test split.
## Additional Information
### Dataset Curators
The dataset creators are not listed in the available documentation.
### Citation Information
No citation information is provided by the author.