The only web scraping service you'll ever need, offering advanced features that are simple to use for efficient data extraction.
ScrAPI is your ultimate web scraping solution, offering powerful, reliable, and easy-to-use features to extract data from any website effortlessly.
This is the official .NET SDK for the ScrAPI web scraping service.
ScrAPI is available on NuGet and can be installed by running the following command in the Package Manager Console within Visual Studio (Tools > NuGet Package Manager > Package Manager Console).
```
Install-Package ScrAPI
```
Alternatively, if you're using the .NET CLI, you can install ScrAPI from the command line with the following command:
```
dotnet add package ScrAPI
```
You can start scraping websites with as few as three lines of code:
```csharp
var client = new ScrapiClient("YOUR_API_KEY"); // "" for limited free mode.
var request = new ScrapeRequest("https://deventerprise.com");
var response = await client.ScrapeAsync(request);

// The result will contain the content and other information about the operation.
Console.WriteLine(response?.Content);
```
The API client implements the `IScrapiClient` interface, which can be used with dependency injection and assists with mocking for unit tests.
```csharp
// Add singleton to IServiceCollection.
services.AddSingleton<IScrapiClient>(_ => new ScrapiClient("YOUR_API_KEY"));
```
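A consuming class can then depend on the interface rather than the concrete client. A minimal sketch (the `PageMonitor` class is hypothetical, included only to illustrate constructor injection):

```csharp
public class PageMonitor
{
    private readonly IScrapiClient _client;

    // The client is injected, so unit tests can substitute a mock IScrapiClient.
    public PageMonitor(IScrapiClient client) => _client = client;

    public async Task<string?> GetContentAsync(string url)
    {
        var response = await _client.ScrapeAsync(new ScrapeRequest(url));
        return response?.Content;
    }
}
```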
The API provides a number of options to assist with scraping a target website.
```csharp
var request = new ScrapeRequest("https://deventerprise.com")
{
    Cookies = new Dictionary<string, string>
    {
        { "cookie1", "value1" },
        { "cookie2", "value2" },
    },
    Headers = new Dictionary<string, string>
    {
        { "header1", "value1" },
        { "header2", "value2" },
    },
    ProxyCountry = "USA",
    ProxyCity = "NewYork",
    ProxyType = ProxyType.Residential,
    UseBrowser = true,
    SolveCaptchas = true,
    IncludeScreenshot = true,
    IncludePdf = true,
    IncludeVideo = true,
    RequestMethod = "GET",
    ResponseFormat = ResponseFormat.Html,
    ResponseSelector = "//div[@class='content']",
    CustomProxyUrl = "https://user:password@local.proxy:8080",
    SessionId = Guid.NewGuid().ToString(),
    CallbackUrl = new Uri("https://webhook.site/"),
};
```
For more detailed information on these options, please refer to the documentation.
When the `UseBrowser` request option is enabled, you can supply any number of browser commands to control the browser before the resulting page state is captured.
```csharp
var request = new ScrapeRequest("https://www.roboform.com/filling-test-all-fields")
{
    UseBrowser = true,
    AcceptDialogs = true,
};

// Example of chaining commands to control the website.
request.BrowserCommands
    .Input("input[name='01___title']", "Mr")
    .Input("input[name='02frstname']", "Werner")
    .Input("input[name='04lastname']", "van Deventer")
    .Select("select[name='40cc__type']", "Discover")
    .Wait(TimeSpan.FromSeconds(3))
    .WaitFor("input[type='reset']")
    .Click("input[type='reset']")
    .Wait(TimeSpan.FromSeconds(1))
    .Scroll(1000)
    .Evaluate("console.log('any valid code...')");
```
The response data contains all the result information about your request, including the HTML data, headers, and any cookies.
```csharp
var response = await client.ScrapeAsync(request);

Console.WriteLine(response.RequestUrl);    // The requested URL.
Console.WriteLine(response.ResponseUrl);   // The final URL of the page.
Console.WriteLine(response.Duration);      // The amount of time the operation took.
Console.WriteLine(response.Attempts);      // The number of attempts to scrape the page.
Console.WriteLine(response.CreditsUsed);   // The number of credits used for this request.
Console.WriteLine(response.StatusCode);    // The response status code from the request.
Console.WriteLine(response.ScreenshotUrl); // The URL of the screenshot file if included.
Console.WriteLine(response.PdfUrl);        // The URL of the PDF file if included.
Console.WriteLine(response.VideoUrl);      // The URL of the video file if included.
Console.WriteLine(response.Content);       // The final page content.
Console.WriteLine(response.ContentHash);   // SHA1 hash of the content.
Console.WriteLine(response.Html);          // Html Agility Pack parsed HTML content.

foreach (var captchaSolved in response.CaptchasSolved)
{
    Console.WriteLine($"{captchaSolved.Value} occurrences of {captchaSolved.Key} solved");
}

foreach (var header in response.Headers)
{
    Console.WriteLine($"{header.Key}: {header.Value}");
}

foreach (var cookie in response.Cookies)
{
    Console.WriteLine($"{cookie.Key}: {cookie.Value}");
}

foreach (var errorMessage in response.ErrorMessages ?? [])
{
    Console.WriteLine(errorMessage); // Any errors that occurred during the request.
}
```
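For example, the `ContentHash` value makes it easy to detect whether a page has changed between scrapes. A minimal sketch (in practice the previous hash would be loaded from storage; here it is hard-coded for illustration):

```csharp
// The hash from a previous scrape (hard-coded here purely for illustration).
var previousHash = "0000000000000000000000000000000000000000";

var response = await client.ScrapeAsync(request);

if (!string.Equals(response.ContentHash, previousHash, StringComparison.OrdinalIgnoreCase))
{
    Console.WriteLine("Page content has changed since the last scrape.");
}
```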
This SDK also provides a number of convenient extensions to assist in parsing and checking the data once retrieved.
Html Agility Pack is included.
Hazz is another good option if you need more HTML parsing methods.
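For example, assuming the `Html` property exposes an Html Agility Pack `HtmlDocument` (as the response fields above suggest), it can be queried directly with standard XPath APIs; the selector below is purely illustrative:

```csharp
var response = await client.ScrapeAsync(request);

// Query the pre-parsed document with a standard Html Agility Pack XPath lookup.
var links = response.Html?.DocumentNode.SelectNodes("//a[@href]");

foreach (var link in links ?? Enumerable.Empty<HtmlAgilityPack.HtmlNode>())
{
    Console.WriteLine(link.GetAttributeValue("href", string.Empty));
}
```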
The SDK provides a static class to define defaults that will be applied to every `ScrapeRequest` object. This can greatly reduce the amount of code required to create new requests when all or most of your requests use the same values.
```csharp
// Set defaults that will apply to all new `ScrapeRequest` objects (unless overridden).
ScrapeRequestDefaults.ProxyType = ProxyType.Residential;
ScrapeRequestDefaults.UseBrowser = true;
ScrapeRequestDefaults.SolveCaptchas = true;
ScrapeRequestDefaults.Headers.Add("Sample", "Custom-Value");

// Any new request will have the corresponding values automatically applied.
var request = new ScrapeRequest("https://deventerprise.com") { ProxyType = ProxyType.Tor };

Debug.Assert(request.ProxyType == ProxyType.Tor); // Overridden.
Debug.Assert(request.UseBrowser);
Debug.Assert(request.SolveCaptchas);
Debug.Assert(request.Headers.ContainsKey("Sample"));
```
The SDK provides wrappers for basic lookups, such as the credit balance of an API key and the list of supported country and city codes to use with the `ProxyCountry` and `ProxyCity` request options.
Easily check the remaining credit balance for your API key.
```csharp
var balance = await client.GetCreditBalanceAsync();

var supportedCountries = await client.GetSupportedCountriesAsync();

// Use the Key value in the ProxyCountry request property.
foreach (var country in supportedCountries)
{
    Console.WriteLine($"{country.Key}: {country.Name}");
}

var supportedCities = await client.GetSupportedCitiesAsync("USA");

// Use the Key value in the ProxyCity request property.
foreach (var city in supportedCities)
{
    Console.WriteLine($"{city.Key}: {city.Name}");
}
```
Any error using the API will result in a `ScrapiException`. This exception also exposes the HTTP status code that caused it, to assist with retry logic.
```csharp
var client = new ScrapiClient("YOUR_API_KEY"); // "" for limited free mode.
var request = new ScrapeRequest("https://deventerprise.com");

try
{
    var result = await client.ScrapeAsync(request);

    // The result will contain the content and other information about the operation.
    Console.WriteLine(result?.Content);
}
catch (ScrapiException ex) when (ex.StatusCode == System.Net.HttpStatusCode.InternalServerError)
{
    // Error messages from the server aim to be as helpful as possible.
    Console.WriteLine(ex.Message);
    throw;
}
```
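Because the failing status code is exposed on the exception, a simple retry loop can target transient server errors. A minimal sketch (the attempt count and backoff are illustrative, not part of the SDK):

```csharp
const int maxAttempts = 3;

for (var attempt = 1; attempt <= maxAttempts; attempt++)
{
    try
    {
        var result = await client.ScrapeAsync(request);
        Console.WriteLine(result?.Content);
        break; // Success, so stop retrying.
    }
    catch (ScrapiException ex) when (
        ex.StatusCode >= System.Net.HttpStatusCode.InternalServerError && attempt < maxAttempts)
    {
        // Back off briefly before retrying a transient server-side failure.
        await Task.Delay(TimeSpan.FromSeconds(attempt));
    }
}
```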
The SDK includes Html Agility Pack as a dependency. If you are looking for additional third-party libraries that work well with Html Agility Pack (CSS selectors, crawling, etc.) to assist with your data extraction requirements, take a look at the following packages: https://html-agility-pack.net/third-party-library