Easyling Release Notes – 2016 March

March Madness has passed in Budapest, and the result is a number of new features ready to keep Easyling at the forefront of the industry. Highlights include TransMapping (the ability to map and translate URL path segments), support for the srcset attribute on images, a UI for the cache exclusion rules, and the ability to exclude pages from a crawl based on regular expressions. See the full list after the jump!

URL TransMapping (BETA)

The ability to translate URLs, not just page content, has long been an objective for us. The first step in this direction was the introduction of subdirectory publishing, which allowed our users to publish translated content under virtual subdirectories instead of subdomains. With TransMapping, URL translation has now essentially reached completion, though a user interface is still to come.

Essentially, TransMapping allows users to define a “map” relating path segments in the source language to each target language. When content is proxied, links are rewritten according to this map; during navigation, the map is applied in reverse on the server side to restore the original reference and request the corresponding source content. This allows a truly seamless experience for the visitor, as even the URLs appear to be in the target language.
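To illustrate the idea, here is a minimal sketch of such a map in Python. The table format, segment names, and function names are all hypothetical assumptions for illustration, not Easyling's actual data model:

```python
# Hypothetical per-language map of source path segments to translations.
SEGMENT_MAP = {
    "de": {"products": "produkte", "about-us": "ueber-uns"},
}

def translate_path(path: str, lang: str) -> str:
    """Rewrite each path segment for outgoing links on proxied pages."""
    mapping = SEGMENT_MAP.get(lang, {})
    return "/".join(mapping.get(seg, seg) for seg in path.split("/"))

def restore_path(path: str, lang: str) -> str:
    """Invert the map on the server side to recover the original source
    URL when the visitor navigates to a translated path."""
    reverse = {v: k for k, v in SEGMENT_MAP.get(lang, {}).items()}
    return "/".join(reverse.get(seg, seg) for seg in path.split("/"))
```

With a map like the one above, `/products/list` would be served to German visitors as `/produkte/list`, and a click on that link would be resolved back to the source path before fetching content.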

srcset Support

The srcset attribute allows the author to supply multiple source files for the same content, in different sizes and pixel densities, letting the browser automatically choose the best one based on information such as display density and viewport size. With this recent addition, the Proxy extracts every source from the set and offers each for replacement in the Resources view – once all are localized, the browser can choose from the same full set of translated resources, delivering the same experience from movie screens to mobile phones.
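A srcset value is a comma-separated list of candidates, each a URL optionally followed by a width or density descriptor (e.g. `img.jpg 1x, img@2x.jpg 2x`). A rough sketch of extracting and rewriting every candidate URL, assuming a simple well-formed attribute value:

```python
def rewrite_srcset(srcset: str, rewrite) -> str:
    """Apply a URL-rewriting function to every candidate in a srcset
    value, preserving each candidate's descriptors (1x, 2x, 480w, ...)."""
    candidates = []
    for candidate in srcset.split(","):
        parts = candidate.strip().split()
        if not parts:
            continue
        url, descriptors = parts[0], parts[1:]
        candidates.append(" ".join([rewrite(url)] + descriptors))
    return ", ".join(candidates)
```

Note that a production parser must handle edge cases this sketch ignores, such as commas inside data URIs; it is only meant to show the shape of the problem.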

Cache Exclusion UI

While this feature has been available for some time, there was no UI for this rather useful ability: excluding specific paths, based on prefixes, from the Source Cache. This is most useful when the page in question has active content that absolutely requires fresh data on every request and cannot be frozen in time, even at the cost of untranslated content bleeding through.

Now, the UI allows you to specify exclusion rules for each cache; content matching those path prefixes/globs will not be served from the cache – instead, the request falls through the proxy to the original server. While this introduces the risk of bleedthrough into the project, it eliminates the perhaps greater risk of serving stale data.
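The matching logic can be sketched as follows. The rule syntax and function name are assumptions for illustration; the actual rule format is whatever the cache exclusion UI accepts:

```python
from fnmatch import fnmatch

# Hypothetical example rules: an API prefix and one exact path.
EXCLUSION_RULES = ["/api/*", "/live-scores"]

def served_from_cache(path: str, rules=EXCLUSION_RULES) -> bool:
    """Return False for paths matching an exclusion prefix or glob,
    so the request falls through to the origin server instead."""
    return not any(
        path.startswith(rule.rstrip("*")) or fnmatch(path, rule)
        for rule in rules
    )
```

Requests for `/api/...` paths would then always hit the origin server for fresh data, while everything else is still served from the Source Cache.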

Regex-based Crawl-time Exclusion

Sometimes you may encounter projects where excluding pages by prefix is not sufficient, and only a regular expression will do. The Proxy now gives you this power: during the crawl only, pages whose source URLs match the regular expression you enter will be skipped. However, they can still be added by visits through the proxy, and content on them will be translated as usual – the pages are not excluded or externalized, only skipped during crawling. Also note that while the regex itself is saved, the feature must be re-activated for each crawl.
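As a hypothetical example of the kind of pattern this enables (the URL scheme below is invented for illustration), a single regex can skip paginated archive pages that no prefix rule could isolate:

```python
import re

# Assumed example: skip numbered archive pages such as
# /archive/2015/page/3 during the crawl.
EXCLUDE = re.compile(r"/archive/\d{4}/page/\d+")

def crawl_should_skip(url_path: str) -> bool:
    """True if the crawler should skip this page; it can still be
    discovered later through a proxied visit."""
    return EXCLUDE.search(url_path) is not None
```

A prefix rule would have to exclude all of `/archive/`, whereas the regex skips only the paginated listings while leaving the yearly index pages crawlable.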

Stay tuned for future updates in April as well!

Dedicated demo for You