Google has announced that it will stop supporting its original AJAX crawling scheme from 2009. The company will “no longer be using the AJAX crawling scheme” as of the beginning of the second quarter of 2018.

This isn’t surprising, since Google said a number of years ago that it would no longer officially and fully support this method of AJAX crawling.

According to Google, it will crawl and render your AJAX-based sites as is. In a new blog post, John Mueller of Google wrote, “Googlebot will render the #! URL directly, making it unnecessary for the website owner to provide a rendered version of the page.” This means Google will continue to support these URLs in its search results.
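For context, the 2009 scheme worked the other way around: a crawler would rewrite a #! (hash-bang) URL into an `_escaped_fragment_` URL and expect the server to answer with a pre-rendered snapshot. A minimal sketch of that translation (the function name is illustrative):

```python
from urllib.parse import quote

def to_escaped_fragment(url: str) -> str:
    """Translate a #! (hash-bang) URL into the equivalent
    _escaped_fragment_ URL that the 2009 AJAX crawling scheme
    expected servers to answer with a pre-rendered snapshot."""
    if "#!" not in url:
        return url  # not an AJAX-crawlable URL; leave unchanged
    base, fragment = url.split("#!", 1)
    sep = "&" if "?" in base else "?"
    return f"{base}{sep}_escaped_fragment_={quote(fragment, safe='')}"

print(to_escaped_fragment("https://example.com/page#!state=about"))
# https://example.com/page?_escaped_fragment_=state%3Dabout
```

With the change Mueller describes, Googlebot renders the #! URL itself, so sites no longer need to serve snapshots at these escaped URLs.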

Based on its tests, Google said it expects that “AJAX-crawling websites won’t see significant changes with this update.” When the switch happens, we’ll have to see who complains about it. Google has shared these tips with webmasters to help them prepare:

  • Verify ownership of the website in Google Search Console to gain access to the tools there, and to allow Google to notify you of any issues that might be found.
  • Test with Search Console’s Fetch & Render. Compare the results of the #! URL and the escaped URL to see any differences. Do this for any significantly different part of the website. Check our developer documentation for more information on supported APIs, and see our debugging guide when needed.
  • Use Chrome’s Inspect Element to confirm that links use “a” HTML elements and include a rel=nofollow where appropriate (for example, in user-generated content).
  • Use Chrome’s Inspect Element to check the page’s title and description meta tag, any robots meta tag, and other metadata. Also, check that any structured data is available on the rendered page.
  • Content in Flash, Silverlight, or other plugin-based technologies needs to be converted to either JavaScript or “normal” HTML if the content should be indexed in search.
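A quick way to spot-check the title, meta tags, and link markup that the tips above mention is to parse the rendered HTML and compare it with what you expect Googlebot to see. A minimal sketch using Python’s standard-library html.parser (the class name and sample HTML are illustrative, not part of any Google tooling):

```python
from html.parser import HTMLParser

class MetadataChecker(HTMLParser):
    """Collect the title, meta tags, and <a> links from rendered HTML
    so they can be compared against what Googlebot should see."""
    def __init__(self):
        super().__init__()
        self.title = None
        self._in_title = False
        self.metas = {}   # meta name -> content (description, robots, ...)
        self.links = []   # (href, rel) pairs, to verify rel=nofollow usage

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and "name" in a:
            self.metas[a["name"]] = a.get("content", "")
        elif tag == "a" and "href" in a:
            self.links.append((a["href"], a.get("rel", "")))

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title = (self.title or "") + data

page = """<html><head><title>Demo</title>
<meta name="description" content="A demo page">
<meta name="robots" content="noindex">
</head><body>
<a href="/user-post" rel="nofollow">comment</a>
</body></html>"""

checker = MetadataChecker()
checker.feed(page)
print(checker.title)                # Demo
print(checker.metas.get("robots"))  # noindex
print(checker.links)                # [('/user-post', 'nofollow')]
```

Running the same check against the raw and the rendered versions of a page makes differences easy to spot, which is the point of the Fetch & Render comparison above.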

Source –