Best Practices For Securing Your Application
When integrating with Repliers APIs, it is crucial to protect your application from abusive users and bots. Our APIs are designed to be consumed server-side, giving you the tools to protect your application and its users. This guide walks you through the necessary steps to secure your API requests, including how to use the X-Repliers-Forwarded-For header for additional rate limiting.
Why Server-Side Requests?
To protect your API key and secure your application, it is important to make requests to our APIs server-side. Client-side requests, such as those made directly from a user's browser or mobile app, expose your API key to the public, making it vulnerable to misuse.
Instead, your application should follow this pattern:
Client-Side Request: The user's client (e.g., web browser or mobile app) sends a request to your server.
Server-Side Proxy: Your server receives the request and processes it. The server then makes a request to the Repliers API on behalf of the client.
Response Handling: Your server receives the API response, processes it if needed, and sends the final response back to the client.
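As a rough illustration, here is a minimal Express wiring for this pattern. Express, the port, and the /api/listings route are assumptions made for the sketch; the request handler itself is shown under Sample Implementation below.
const express = require('express');
// Hypothetical module path for illustration; the handler's implementation
// is shown in the Sample Implementation section below.
const handleClientRequest = require('./handleClientRequest');

const app = express();

// The client calls your server, never the Repliers API directly,
// so your API key stays on the server.
app.get('/api/listings', handleClientRequest);

app.listen(3000, () => console.log('Proxy listening on port 3000'));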
Implementing Rate Limiting
Even when using a server-side proxy, it is essential to implement rate-limiting middleware to prevent abusive users and bots from overloading your application. Rate limiting helps manage the number of requests a user can make within a specific time frame, protecting your resources and ensuring fair usage.
Many libraries and frameworks offer rate-limiting middleware. Depending on your technology stack, you can integrate this middleware into your server-side proxy to throttle requests based on criteria such as IP address, user session, or API key.
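For example, if your proxy is built on Express, the express-rate-limit package provides drop-in middleware. This is a minimal sketch: the window and limit values are illustrative rather than recommendations, and app is assumed to be your Express application.
const rateLimit = require('express-rate-limit');

// Allow each IP at most 100 requests per 15-minute window.
const limiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100,                 // limit each IP to 100 requests per window
  standardHeaders: true,    // report limits via standard RateLimit-* headers
});

// Apply the limiter to every route that proxies to the Repliers API.
app.use('/api/', limiter);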
Using the X-Repliers-Forwarded-For Header
For an additional layer of security, Repliers' API Firewall supports the X-Repliers-Forwarded-For header. This header allows you to pass the user's IP address from your server-side proxy to Repliers' APIs. By providing this information, Repliers can apply rate-limiting measures directly on the user's IP address, adding an extra layer of protection against abusive behavior.
Sample Implementation
Here’s an example of how you can implement the X-Repliers-Forwarded-For header in a typical server-side request using Node.js with the axios library:
const axios = require('axios');

// Proxies a client request to the Repliers API, forwarding the user's IP
// so the Repliers API Firewall can rate-limit on a per-user basis.
const handleClientRequest = async (req, res) => {
  // req.connection is deprecated in modern Node.js; req.socket is the
  // supported way to read the remote address.
  const userIp = req.socket.remoteAddress;

  try {
    const response = await axios.get('https://api.repliers.com/listings', {
      headers: {
        'REPLIERS-API-KEY': 'YOUR_API_KEY',
        'X-Repliers-Forwarded-For': userIp,
      },
      params: {
        // Your API request parameters here
      },
    });

    res.json(response.data);
  } catch (error) {
    console.error('Error making API request:', error);
    res.status(500).json({ error: 'Internal Server Error' });
  }
};

module.exports = handleClientRequest;
In this example:
The X-Repliers-Forwarded-For header is set to the user's IP address, read from the incoming request's socket.
The server-side proxy makes the API request to Repliers, passing along the X-Repliers-Forwarded-For header.
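One caveat worth noting: if your proxy itself sits behind a load balancer or CDN (an assumption that depends on your infrastructure), req.socket.remoteAddress will be the balancer's address rather than the user's. In that deployment the client IP typically arrives in the X-Forwarded-For request header instead:
// Take the first (left-most) address from X-Forwarded-For when present,
// falling back to the socket address for direct connections.
const forwarded = req.headers['x-forwarded-for'];
const userIp = forwarded ? forwarded.split(',')[0].trim() : req.socket.remoteAddress;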
Result: Abusive Traffic Blocked
The graph below shows an immediate decrease in API requests for a subscriber's website following the implementation of the Repliers API Firewall.
Handling Requests from Popular Search Engines
Repliers will not rate-limit requests originating from popular search engine IP addresses. While it is beneficial to allow search engines to crawl your website for SEO purposes, be aware that these crawls count toward your API request quota. If you do not want search engine crawlers to reach your APIs, configure your robots.txt file to instruct them not to crawl those parts of your website.
Example of robots.txt Configuration
The robots.txt file is used to manage how search engine crawlers interact with your website. Here’s an example of how to disallow search engines from crawling certain sections of your site:
User-agent: *
Disallow: /api/
Disallow: /private/

# Allow Googlebot to crawl everything
User-agent: Googlebot
Disallow:

# Block a specific crawler (e.g., Bingbot) from the entire site
User-agent: Bingbot
Disallow: /
In this example:
User-agent: *: Applies the rules that follow to any crawler without a more specific group.
Disallow: /api/: Prevents those crawlers from accessing the /api/ directory, which might be where your API requests are routed.
User-agent: Googlebot: Gives Googlebot its own group with an empty Disallow, granting it full access and overriding the wildcard rules.
User-agent: Bingbot: Blocks Bingbot from crawling the entire site.
Best Practices
Keep API Keys Secure: Never expose your API keys in client-side code. Always use server-side proxies for API requests.
Implement Rate Limiting: Use rate-limiting middleware in your server-side proxy to prevent abusive users from overloading your application.
Use X-Repliers-Forwarded-For Header: Enhance security by passing the user's IP address to Repliers using the X-Repliers-Forwarded-For header. This allows our firewall to monitor and limit requests on a per-user basis.
Manage Search Engine Traffic: Use robots.txt to control which parts of your site search engines can crawl, balancing the need for SEO with the impact on your API usage.
In Summary
By following these guidelines, you can significantly enhance the security of your application when integrating with Repliers APIs. Implementing a server-side proxy, rate limiting, using the X-Repliers-Forwarded-For header, and managing search engine crawler traffic are critical steps in ensuring your application remains secure and efficient.
For further assistance or questions, please reach out to our support team.