If you'd like to customize the solution accelerator, here are some ways you might do that:
- Ingest your own audio conversation files by uploading them into the `cu_audio_files_all` lakehouse folder and running the data pipeline.
- Deploy with Microsoft Fabric by following the steps in `Fabric_deployment.md`.
- **Create Fabric workspace**
  - Navigate to (Fabric Workspace)
  - Click on the Data Engineering experience
  - Click on Workspaces in the left navigation
  - Click on + New Workspace
    - Provide a name for the workspace
    - Provide a description for the workspace (optional)
    - Click Apply
  - Open the workspace
  - Create Environment
    - Click + New Item in the workspace
    - Select Environment from the list
    - Provide a name for the Environment and click Create
    - Select Public libraries in the left panel
    - Click Add from .yml
    - Upload the .yml from here
    - Click Publish
  - Retrieve the Workspace ID from the URL; refer to the documentation for additional assistance (here)

***Note: Wait until the Environment has finished publishing before proceeding with the next steps.***
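The Workspace ID is the GUID embedded in the workspace URL. As a minimal sketch, assuming the ID is the GUID segment that follows `/groups/` (the URL below is a hypothetical example, not a real workspace), it can be extracted like this:

```shell
# Hypothetical workspace URL; substitute the one from your browser.
url="https://app.fabric.microsoft.com/groups/aaaabbbb-cccc-dddd-eeee-ffff00001111/list"

# Pull out the 36-character GUID that follows /groups/.
workspace_id=$(printf '%s\n' "$url" | sed -n 's#.*/groups/\([0-9a-fA-F-]\{36\}\).*#\1#p')
echo "$workspace_id"
```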
- **Deploy Fabric resources and artifacts**
  - Navigate to the (Azure Portal)
  - Click on Azure Cloud Shell in the top right of the navigation menu
  - Run the following commands:

    ```shell
    az login
    ```

    Follow the login instructions shown in Azure Cloud Shell, then run:

    ```shell
    rm -rf ./Conversation-Knowledge-Mining-Solution-Accelerator
    git clone https://github.com/microsoft/Conversation-Knowledge-Mining-Solution-Accelerator
    cd ./Conversation-Knowledge-Mining-Solution-Accelerator/Deployment/scripts/fabric_scripts
    sh ./run_fabric_items_scripts.sh keyvault_param workspaceid_param solutionprefix_param
    ```
    - `keyvault_param` - the name of the Key Vault that was created in Step 1
    - `workspaceid_param` - the Workspace ID created in Step 2
    - `solutionprefix_param` - the prefix appended to the lakehouse name upon creation
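For illustration, with hypothetical values (a Key Vault named `kv-ckm-demo`, an example workspace GUID, and the prefix `ckm`; none of these are real resources), the call would take the following shape. The snippet only assembles and prints the command rather than executing the script:

```shell
# Hypothetical values -- replace with your own Key Vault name,
# Workspace ID, and solution prefix.
keyvault_param="kv-ckm-demo"
workspaceid_param="aaaabbbb-cccc-dddd-eeee-ffff00001111"
solutionprefix_param="ckm"

# Echo the assembled command instead of running it.
echo "sh ./run_fabric_items_scripts.sh $keyvault_param $workspaceid_param $solutionprefix_param"
```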
- **Add App Authentication**

  Follow the steps in App Authentication to configure authentication in the app service.
All WAV files can be uploaded to the corresponding Lakehouse in the data/Files folder:

- Audio (WAV files): upload audio files to the `cu_audio_files_all` folder.
- To process additional files, manually execute the `pipeline_notebook` after uploading new files.
- The OpenAI prompt can be modified within the Fabric notebooks.