Book, English, 75 pages, Format (W × H): 147 mm x 225 mm, Weight: 162 g
Series: Elements in Quantitative and Computational Methods for the Social Sciences
ISBN: 978-1-108-95850-9
Publisher: Cambridge University Press
Text contains a wealth of information about a wide variety of sociocultural constructs. Automated prediction methods can infer these quantities (sentiment analysis is probably the most well-known application), and there is virtually no limit to the kinds of things we can predict from text: power, trust, and misogyny are all signaled in language. These algorithms easily scale to corpus sizes that are infeasible for manual analysis. Prediction algorithms have become steadily more powerful, especially with the advent of neural network methods. However, applying these techniques usually requires profound programming knowledge and machine learning expertise, and as a result, many social scientists do not apply them. This Element provides the working social scientist with an overview of the most common methods for text classification, an intuition for their applicability, and Python code to execute them. It covers both the ethical foundations of such work and the emerging potential of neural network methods.
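For readers unfamiliar with what such text classification looks like in practice, the following is a minimal sketch (not taken from the Element itself) of a typical pipeline, assuming scikit-learn is available; the toy sentiment data and labels are purely illustrative.

```python
# Minimal illustrative text-classification pipeline (assumption: scikit-learn
# is installed; the tiny sentiment dataset below is made up for demonstration).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I loved this film, it was wonderful",
    "Absolutely fantastic experience",
    "Terrible plot and wooden acting",
    "I hated every minute of it",
]
labels = ["positive", "positive", "negative", "negative"]

# Bag-of-words features (TF-IDF weighted) feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Predict the label of a new, unseen document.
print(model.predict(["what a wonderful, fantastic film"]))  # -> ['positive']
```

The same pattern scales from this toy example to large corpora: a vectorizer turns raw text into numeric features, and a classifier maps those features to the construct of interest.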
Authors/Editors
Subject areas
- Social Sciences > Sociology | Social Work > General Sociology > Empirical Social Research, Statistics
- Mathematics | Computer Science > IT | Computer Science > Data / Databases > Data Mining
- Mathematics | Computer Science > IT | Computer Science > Data / Databases > Automated Data Capture, Data Analysis
- Interdisciplinary Sciences > Sciences: Research and Information > Research Methodology, Scientific Equipment
Further information & material
1. Introduction; 2. Ethics, Fairness, and Bias; 3. Classification; 4. Text as Input; 5. Labels; 6. Train-Dev-Test; 7. Performance Metrics; 8. Comparison and Significance Testing; 9. Overfitting and Regularization; 10. Model Selection and Other Classifiers; 11. Model Bias; 12. Feature Selection; 13. Structured Prediction; 14. Neural Networks Background; 15. Neural Architectures and Models.