Use Web Sites as Content for HumanX Bots
In addition to the HumanX content you enter in the text box, the platform can periodically scrape a list of Web sites you specify, retrieve their documents, and add them to the content available to HumanX bots.
Check if your plan includes this feature or contact customer support to request it.
The Web Scraper automatically scans a list of URLs you provide. The collected data is then processed and made available to HumanX bots, allowing them to respond with content specific to your application.
Setting Up Your Web Scraper
To begin using the Web Scraper, navigate to SETTINGS→Advanced→HumanX and input your desired URLs.
For each URL, the Scraper retrieves the pages and documents under that Web path. For example, if the URL is https://example.com, the Scraper will retrieve pages such as https://example.com/abc.html.
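To make the scoping rule concrete, here is a minimal Python sketch of how a page can be tested against a configured Web path. This is purely illustrative: the platform's actual scoping logic (e.g. handling of subdomains or redirects) is not documented here, and the function name `in_scope` is an assumption.

```python
from urllib.parse import urlparse

def in_scope(base_url: str, candidate: str) -> bool:
    """Return True if `candidate` lies under the Web path of `base_url`.

    Illustrative sketch only; the platform's real rules may differ.
    """
    base, cand = urlparse(base_url), urlparse(candidate)
    return (
        cand.scheme == base.scheme
        and cand.netloc == base.netloc
        and cand.path.startswith(base.path)
    )

# Pages under the configured URL are in scope; other hosts are not.
print(in_scope("https://example.com", "https://example.com/abc.html"))
print(in_scope("https://example.com", "https://other.com/abc.html"))
```

Under this sketch, the first call returns True and the second returns False, matching the example above.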
Scraping generally occurs once a day, during the night. Your account has a limit on the maximum number of pages scraped; if you need to scrape more pages, contact your account manager.
The content of the retrieved documents will be combined with the content of the training text box in SETTINGS→Advanced→HumanX.
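Conceptually, the combination step can be pictured as concatenating the text-box content with the scraped documents into one body of reference material. The sketch below is a hypothetical illustration; the platform's actual merging, deduplication, and truncation behavior is not documented here, and `build_bot_context` is an invented name.

```python
def build_bot_context(textbox_content: str, scraped_docs: list[str]) -> str:
    """Combine training text-box content with scraped documents.

    Hypothetical sketch of the merging step; not the platform's
    actual implementation.
    """
    parts = [textbox_content] + scraped_docs
    # Drop empty entries and separate the rest with blank lines.
    return "\n\n".join(p.strip() for p in parts if p.strip())

context = build_bot_context(
    "Our product supports feature X.",
    ["Scraped page: pricing overview.", "Scraped page: FAQ."],
)
```

The resulting `context` would then serve as the knowledge the bot draws on when answering.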
Leveraging Web Data in HumanX
After each daily run, the Web Scraper feeds the collected data into HumanX. When a conversation requires information covered by the web data, HumanX incorporates this knowledge into its responses.