Detailed workflow
| Week | Task | Status | Comments |
| --- | --- | --- | --- |
| 20-May | Study work: state of the art on models, optimization, and evaluation | Done | Look for optimization techniques and how anonymization models are evaluated. |
| 27-May | Finalizing the dataset and libraries to use (suppression, renaming, etc.) | Done | Kubernetes logs/metrics, OpenStack logs/metrics, or any data containing PII. |
| 3-June | Anonymization impact on the model's utility | Done | |
| 10-June | | Done | |
| 17-June | Containerization and the APIs | | |
| 24-June | Automation using Python | | |
| 1-July | Testing of the containerized architecture | | |
| 8-July | NLP model for anonymizing telco data | | |
| 15-July | | | |
| 22-July | | | |
| 29-July | | | |
| 5-Aug | Evaluation of the model | | |
| 12-Aug | Integration of the developed model with the architecture | | |
| 19-Aug | Documentation and release of the code | | |
| 26-Aug | [BUFFER] | | |
...
- Precision and Recall: These metrics are commonly used to assess the performance of NLP models in text anonymization. Precision measures the proportion of correctly anonymized information among all the information that the model labeled as sensitive, while recall measures the proportion of correctly anonymized information among all the sensitive information present in the text.
- F1 Score: The F1 score, the harmonic mean of precision and recall, balances false positives against false negatives and gives a single-number summary of the model's effectiveness.
- Note that all of the above metrics require ground-truth annotations of the sensitive information in the test data.
- To measure the loss of text utility, one approach is to train a model on the data before anonymization and again after anonymization, then compare performance on the same task. The smaller the difference, the better the anonymization preserves utility.
- Human Evaluations: Human evaluations involve experts assessing the anonymized documents for re-identification risks and data utility preservation.
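The metric computation above can be sketched as follows. This is a minimal illustration, assuming ground-truth sensitivity labels are available per token; the label sets used below are hypothetical, not taken from any real dataset.

```python
# Minimal sketch: token-level precision/recall/F1 for an anonymizer,
# given ground-truth labels of which tokens are sensitive.

def anonymization_scores(gold, predicted):
    """gold/predicted: sets of token indices labeled as sensitive."""
    tp = len(gold & predicted)      # correctly anonymized tokens
    fp = len(predicted - gold)      # over-anonymized (hurts utility)
    fn = len(gold - predicted)      # leaked PII (hurts privacy)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical log line: tokens 2 and 5 are PII in the ground truth;
# the model flagged tokens 2, 5, and 7.
p, r, f1 = anonymization_scores(gold={2, 5}, predicted={2, 5, 7})
print(round(p, 2), round(r, 2), round(f1, 2))  # → 0.67 1.0 0.8
```

Here a false positive (token 7) lowers precision but not recall, which matches the privacy/utility trade-off described above: leaked PII shows up as low recall, over-redaction as low precision.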
Reference Research papers:
- https://aclanthology.org/2021.acl-long.323.pdf (Showcases the problems and the evaluation methodology for anonymization models)
- https://www.researchgate.net/publication/347730431_Anonymization_Techniques_for_Privacy_Preserving_Data_Publishing_A_Comprehensive_Survey (A survey for different types of techniques)
...
- Metrics like precision, recall, and F1-score can be used to assess how well the method identifies sensitive information.
- https://github.com/anonymous-NLP/anonymisation/blob/main/aggregated_annotations.pdf I am also considering comparing our anonymization output against these aggregated annotations, to have an external check on the model's performance.
- However, the impact on models requires domain-specific evaluation. Some approaches that I will follow are:
- Compare model performance: Train and test models on original and anonymized data to see the accuracy drop.
- Evaluate information loss: Measure how much relevant information is lost due to anonymization.
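Both checks above can be sketched with simple helpers. This is a minimal illustration under assumptions: the accuracies passed to `utility_drop` stand in for two real training runs (the numbers are hypothetical placeholders), and information loss is approximated here as the share of distinct tokens removed by anonymization.

```python
# Minimal sketch of the "train before vs. after anonymization" comparison
# and a crude token-level information-loss measure.

def utility_drop(acc_original, acc_anonymized):
    """Absolute drop in task accuracy caused by anonymization."""
    return acc_original - acc_anonymized

def token_information_loss(original_tokens, anonymized_tokens):
    """Share of distinct tokens no longer present after anonymization."""
    orig, anon = set(original_tokens), set(anonymized_tokens)
    return len(orig - anon) / len(orig) if orig else 0.0

drop = utility_drop(0.91, 0.88)  # hypothetical accuracies from two runs
loss = token_information_loss(
    "user alice logged in from 10.0.0.5".split(),
    "user <NAME> logged in from <IP>".split(),
)
print(round(drop, 2), round(loss, 2))  # → 0.03 0.33
```

A small `utility_drop` with a high `token_information_loss` would suggest the removed tokens were not needed for the downstream task, which is the desired outcome.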
Anonymization Impact on the Model's utility
The work has been updated on the personal page to avoid exposing work in progress.