# WCAG AI validator
- Install the required dependencies:
```
pip install -r requirements.txt
```
## .env variables
* mllm_end_point_openai='https://hiis-accessibility-fonderia.cognitiveservices.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2025-01-01-preview'
* mllm_api_key_openai=
* mllm_model_id_openai='gpt-4o'
* mllm_end_point_local='https://vgpu.hiis.cloud.isti.cnr.it/api/chat'
* mllm_api_key_local=
* mllm_model_id_local='gemma3:4b' # e.g. 'gemma3:12b' for the larger local model
* use_openai_model='False' # set to 'True' to use the OpenAI model, 'False' to use the local one
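The validator scripts presumably read these values from the environment at start-up; the sketch below shows one way to do that, assuming python-dotenv is used (the variable names come from the list above, everything else is illustrative):

```python
# Illustrative sketch only: assumes python-dotenv; check requirements.txt
# and the validator scripts for how the configuration is actually read.
import os
from dotenv import load_dotenv

load_dotenv()  # load the .env file from the current working directory

use_openai = os.getenv("use_openai_model", "False") == "True"

if use_openai:
    endpoint = os.getenv("mllm_end_point_openai")
    api_key = os.getenv("mllm_api_key_openai")
    model_id = os.getenv("mllm_model_id_openai")
else:
    endpoint = os.getenv("mllm_end_point_local")
    api_key = os.getenv("mllm_api_key_local")
    model_id = os.getenv("mllm_model_id_local")

print(f"Using model {model_id} at {endpoint}")
```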
## For the CLI version use:
```
python wcag_validator.py
```
## For the REST service use:
```
python wcag_validator_RESTserver.py
```
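Once the REST server is running (port 8000, as in the Docker section below), it can be called over HTTP. The sketch below is hypothetical: the `/validate` endpoint name and the `html` payload field are assumptions, not taken from the actual API; see wcag_validator_RESTserver.py for the real routes.

```python
# Hypothetical client: endpoint path and payload shape are assumptions.
import requests

html_snippet = "<img src='logo.png'>"  # markup to be checked for WCAG issues

response = requests.post(
    "http://localhost:8000/validate",  # assumed endpoint
    json={"html": html_snippet},       # assumed payload shape
    timeout=120,
)
response.raise_for_status()
print(response.json())
```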
## For the UI use:
```
python wcag_validator_ui.py
```
## Docker (Dockerfile located at /LLM_accessibility_validator)
### REST server
```
docker build -t wcag_rest_server .
docker run --env-file .env -p 8000:8000 --name wcag_rest_server -d wcag_rest_server
```
### UI
```
docker build -t wcag_ui .
docker run --env-file UI/.env -p 7860:7860 --name wcag_ui -d wcag_ui
```
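As an optional convenience, the two containers could also be started together with Docker Compose. The sketch below is not part of the repository; only the image names, env files, and ports come from the commands above:

```yaml
# Unofficial sketch: runs the images built with the docker build commands above.
services:
  wcag_rest_server:
    image: wcag_rest_server
    env_file: .env
    ports:
      - "8000:8000"
  wcag_ui:
    image: wcag_ui
    env_file: UI/.env
    ports:
      - "7860:7860"
```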
## Scripts
The `scripts` folder contains some processing scripts; they require a dedicated requirements file.