A couple of new features

Hello everyone,

We have been working on a few small new features that we would like to show you. Here they are:

  • Pack descriptions: you can now add a textual description to the packs of sounds you create. It is as simple as going to the pack page and clicking the “Add a description for the pack…” button that appears when you are logged in as the author of the pack.
  • Browse by comments: on the sounds page there is a new browse link called “browse latest comments”. Following that link you will find a paginated list of all the comments written about sounds, sorted by date. This feature introduces a new way of browsing sounds: reading what people are saying about them!
  • Captcha in messages: from now on, people who have uploaded at least one sound won’t have to fill in the captcha form when sending messages.
  • Sharing geotag maps: another new feature is the possibility to share and embed geotag maps. When you are browsing the geotags page (sounds > browse geotags), you’ll see a new “Share this map” link below the map (this link only appears when your zoom level is greater than 2). Clicking this link gives you HTML code for embedding the map in an external page such as a blog (preserving the currently displayed sounds, zoom and map position), as well as a URL pointing to this “portion” of the map that you can easily share. By tweaking the given iframe code a bit, embedded maps can have custom width and height parameters (see the embed sketch after this list). Just as an example for you:

  • Shareable link to this portion of the map

    [embedded example map]

  • New API features: finally, we have also implemented two new features for our API. We added a new resource (the sound geotags resource) that lets you retrieve a list of sounds geotagged inside a defined rectangular area.
    On the other hand, we have implemented a new way to select which information about sounds is returned in any sound list. From now on, this can be done using an optional “fields” parameter that takes a comma-separated list of properties (e.g. fields=id,duration,tags). For more details check the API documentation. A short Python sketch of both API features follows below.
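
As an illustration of the iframe tweak mentioned in the geotag maps item above, here is a minimal sketch of what the adjusted embed code could look like. The src URL below is a made-up placeholder, not a real map address (the real code comes from the “Share this map” link); the width and height values are the only part meant to be edited by hand:

    <!-- hypothetical embed snippet; copy the real code from “Share this map” -->
    <!-- only width and height below have been changed from the generated code -->
    <iframe src="http://www.freesound.org/geotags/embed?..."
            width="600" height="400" frameborder="0"></iframe>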
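
Similarly, here is a minimal Python sketch of the two new API features. The endpoint paths and the bounding-box parameter names (min_lat, max_lat, min_lon, max_lon) are assumptions made for illustration only; the real resource URLs and parameter names are in the API documentation.

    # Minimal sketch, not the official client. The endpoint paths and the
    # bounding-box parameter names are assumptions; check the API docs.
    import json
    import urllib.parse
    import urllib.request

    API_KEY = "YOUR_API_KEY"  # placeholder; use your own key
    BASE = "http://www.freesound.org/api"

    def _get(path, **params):
        # Build the query string, attach the API key and fetch JSON.
        params["api_key"] = API_KEY
        url = BASE + path + "?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    def sounds_in_rectangle(min_lat, max_lat, min_lon, max_lon):
        # Hypothetical sound geotags resource: retrieves the sounds
        # geotagged inside the given rectangular area.
        return _get("/sounds/geotag/", min_lat=min_lat, max_lat=max_lat,
                    min_lon=min_lon, max_lon=max_lon)

    def search(query):
        # The new optional "fields" parameter: a comma-separated list of
        # the sound properties to include in the returned sound list.
        return _get("/sounds/search/", q=query, fields="id,duration,tags")

Under these assumptions, sounds_in_rectangle(41.3, 41.5, 2.0, 2.3) would return the sounds geotagged in a rectangle around Barcelona, and search("rain") would return a sound list where each entry carries only the id, duration and tags properties.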

Well, that is all for the moment. Enjoy the new features and keep freesounding!!!

– frederic


6 Responses to A couple of new features

  1. Peter says:

    Great! I should upload a few more sounds, geotag them, and make a little page on my website… maybe one day I’ll get ’round to some more interesting freesound mashing…

    It would seem interesting to be able to query for sounds matching particular analysis descriptor value ranges… I wonder if this is on the cards?

  2. frederic.font says:

    Hi Peter,
    notice that the geotag embed takes ALL sounds inside a geographic rectangle, so if you embed a map in your page, not only your sounds will appear but also any other sounds geotagged in the same zone. However, we might improve this in the near future.

    Regarding being able to query by particular analysis descriptors, this is something we have been talking about. For the moment we think it is a bit complicated, because it is hard to integrate such a *technical* and *specific* thing, which requires deep knowledge to understand, into a *general purpose* search interface. We have been thinking about implementing some functionality to allow the definition of custom similarity measures for similarity search (using analysis descriptors), but always asking for a “target” or “example” sound (query by example). However, we would definitely like to provide this kind of functionality in the future, probably first through the API!

    Thank you for your comments!

    – frederic

  3. Peter says:

    Ah, I hadn’t realized the embed includes everything in the region; still, good work.

    As for querying analysis descriptors: I was only thinking of the API, with a view to using it to perhaps make some interactive algorithmic composition type thing… if I find the time…

    Am I right in thinking that all the analysis is done automatically on all sounds in the database, such that writing the code to expose it for querying in the API should be fairly trivial?

    Thanks for responding!

    Peter

  4. frederic.font says:

    As you say, all analysis is done automatically right after sounds have been described. However, we store descriptor information in a separate database specially prepared for similarity search (GAIA). This database also gives us the possibility to query for descriptor ranges, but we would still need to implement the “binding” to communicate with GAIA (similarity search works as a separate web service, so we would need to implement another web service for descriptor querying).

    To summarize: we have the infrastructure, but the implementation is still not trivial. Nevertheless, I think this could be a very cool feature for the API, and I’m quite sure we will implement it some day (hopefully not too far off)!

  5. wisslegisse says:

    Hi,

    I just read an announcement about similarity search that led me to this website:
    http://www.imagine-research.com/home
    I couldn’t tell whether the software is open source, though it was developed with support from an NSF grant, so it should be available to the public under some terms.

    I’m posting it here in case it is of any relevance to you.

  6. Peter says:

    Ah, the classic difference between ‘fairly trivial’ and ‘really trivial’…

    When applied to the word trivial in the context of a computer program, ‘fairly’ can mean a few man-months effort 😉
