url: Use removeAccents instead of lodash in cleanForSlug
tyxla committed Jun 13, 2022
commit 16afd375fd298c63d7b2d8288bba535a0db76b46
15 changes: 7 additions & 8 deletions packages/url/src/clean-for-slug.js
@@ -1,7 +1,7 @@
 /**
  * External dependencies
  */
-import { deburr, trim } from 'lodash';
+import removeAccents from 'remove-accents';

 /**
  * Performs some basic cleanup of a string for use as a post slug.
@@ -23,11 +23,10 @@ export function cleanForSlug( string ) {
 	if ( ! string ) {
 		return '';
 	}
-	return trim(
-		deburr( string )
-			.replace( /[\s\./]+/g, '-' )
-			.replace( /[^\p{L}\p{N}_-]+/gu, '' )
-			.toLowerCase(),
-		'-'
-	);
+	return removeAccents( string )

@adamziel (Contributor) commented on Jun 13, 2022:
Would this work too, @tyxla?

const str = "Crème Brulée"
str.normalize("NFD").replace(/[\u0300-\u036f]/g, "")
> "Creme Brulee"

Found it on StackOverflow – perhaps we could do without the remove-accents dependency entirely? I saw it was based on a character map, so perhaps str.normalize would give us even better coverage for free?
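For clarity, that suggestion would roughly amount to the following variant of cleanForSlug (a hypothetical sketch only; as discussed below, the PR keeps the remove-accents dependency instead):

```js
/**
 * Hypothetical variant of cleanForSlug() that strips accents via Unicode
 * normalization instead of the remove-accents package.
 */
export function cleanForSlug( string ) {
	if ( ! string ) {
		return '';
	}
	return string
		.normalize( 'NFD' )
		// Drop the combining diacritical marks left over after NFD decomposition.
		.replace( /[\u0300-\u036f]/g, '' )
		.replace( /[\s\./]+/g, '-' )
		.replace( /[^\p{L}\p{N}_-]+/gu, '' )
		.toLowerCase()
		.trim()
		.replace( /(^-)|(-$)/g, '' );
}
```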

@tyxla (Member, Author) commented on Jun 13, 2022:
Thanks for asking, @adamziel; it's actually a great question!

I'd be happy to go with that; it's always ideal not to rely on another package. The thing is, that solution works for most characters, but from what I've seen it doesn't cover them all, or at least not as many as the package I've used does.

Try this in your browser console (I've built it from the package's character map):

"ÀÁÂÃÄÅẤẮẲẴẶÆẦẰȂÇḈÈÉÊËẾḖỀḔḜȆÌÍÎÏḮȊÐÑÒÓÔÕÖØỐṌṒȎÙÚÛÜÝàáâãäåấắẳẵặæầằȃçḉèéêëếḗềḕḝȇìíîïḯȋðñòóôõöøốṍṓȏùúûüýÿĀāĂ㥹ĆćĈĉĊċČčC̆c̆ĎďĐđĒēĔĕĖėĘęĚěĜǴĝǵĞğĠġĢģĤĥĦħḪḫĨĩĪīĬĭĮįİıIJijĴĵĶķḰḱK̆k̆ĹĺĻļĽľĿŀŁłḾḿM̆m̆ŃńŅņŇňʼnN̆n̆ŌōŎŏŐőŒœP̆p̆ŔŕŖŗŘřR̆r̆ȒȓŚśŜŝŞȘșşŠšŢţțȚŤťŦŧT̆t̆ŨũŪūŬŭŮůŰűŲųȖȗV̆v̆ŴŵẂẃX̆x̆ŶŷŸY̆y̆ŹźŻżŽžſƒƠơƯưǍǎǏǐǑǒǓǔǕǖǗǘǙǚǛǜỨứṸṹǺǻǼǽǾǿÞþṔṕṤṥX́x́ЃѓЌќA̋a̋E̋e̋I̋i̋ǸǹỒồṐṑỪừẀẁỲỳȀȁȄȅȈȉȌȍȐȑȔȕB̌b̌Č̣č̣Ê̌ê̌F̌f̌ǦǧȞȟJ̌ǰǨǩM̌m̌P̌p̌Q̌q̌Ř̩ř̩ṦṧV̌v̌W̌w̌X̌x̌Y̌y̌A̧a̧B̧b̧ḐḑȨȩƐ̧ɛ̧ḨḩI̧i̧Ɨ̧ɨ̧M̧m̧O̧o̧Q̧q̧U̧u̧X̧x̧Z̧z̧".normalize("NFD").replace(/[\u0300-\u036f]/g, "")

and you'll get:

AAAAAAAAAAAÆAAACCEEEEEEEEEEIIIIIIÐNOOOOOØOOOOUUUUYaaaaaaaaaaaæaaacceeeeeeeeeeiiiiiiðnoooooøoooouuuuyyAaAaAaCcCcCcCcCcDdĐđEeEeEeEeEeGGggGgGgGgHhĦħHhIiIiIiIiIıIJijJjKkKkKkLlLlLlĿŀŁłMmMmNnNnNnʼnNnOoOoOoŒœPpRrRrRrRrRrSsSsSSssSsTttTTtŦŧTtUuUuUuUuUuUuUuVvWwWwXxYyYYyZzZzZzſƒOoUuAaIiOoUuUuUuUuUuUuUuAaÆæØøÞþPpSsXxГгКкAaEeIiNnOoOoUuWwYyAaEeIiOoRrUuBbCcEeFfGgHhJjKkMmPpQqRrSsVvWwXxYyAaBbDdEeƐɛHhIiƗɨMmOoQqUuXxZz

While the result is satisfactory, you'll notice that quite a few characters actually remain unchanged.

For the library above, the character map was originally compiled using the corresponding WP function, so this also gets us closer to parity with the WordPress backend - a nice side benefit IMHO.

What do you think?
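A minimal sketch of that coverage gap, assuming remove-accents maps non-decomposable characters such as Ł per its character map (as WordPress's remove_accents() does):

```js
import removeAccents from 'remove-accents';

// Ł has no Unicode decomposition, so NFD normalization followed by
// stripping combining marks (U+0300–U+036F) leaves it untouched.
const viaNormalize = ( str ) =>
	str.normalize( 'NFD' ).replace( /[\u0300-\u036f]/g, '' );

console.log( viaNormalize( 'Łódź' ) ); // "Łodz" – the Ł survives
console.log( removeAccents( 'Łódź' ) ); // expected: "Lodz"
```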

@adamziel (Contributor) replied:
Nice test!

> For the library above, the character map was originally compiled using the corresponding WP function, so this also gets us closer to parity with the WordPress backend - a nice side benefit IMHO.

Alright, let's go with the package 👍

+		.replace( /[\s\./]+/g, '-' )
+		.replace( /[^\p{L}\p{N}_-]+/gu, '' )
+		.toLowerCase()
+		.trim()
+		.replace( /(^-)|(-$)/g, '' );
 }
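
For reference, a quick sketch of what the updated implementation produces (assuming cleanForSlug is still exported from @wordpress/url as before):

```js
import { cleanForSlug } from '@wordpress/url';

// Accents are stripped, whitespace/dots/slashes collapse into hyphens,
// remaining disallowed characters are dropped, and edge hyphens are trimmed.
cleanForSlug( 'Crème Brulée Recipe!' ); // expected: "creme-brulee-recipe"
cleanForSlug( '  Hello / World.txt ' ); // expected: "hello-world-txt"
```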