insight by wfx

Welcome to Wordwide FX's new enterprise!

Insight by WFX is a synthesis of our passion for languages and the financial markets. Here you will find technical and fundamental analyses from our clients, media partners and contributors in different languages, as well as discussions on languages and translation. And of course we will keep you updated on what is happening inside Wordwide FX Financial Translations. Hope you enjoy it! Greetings from the Wordwide FX team!

08/09/2017

Is EURUSD turning around? It's trying…


By Greg Michalowski @GregMikeFX, Director of Client Education at ForexLive. Translated by Wordwide FX Financial Translations

…but it still has a lot of work to do.

Is EURUSD turning around? That is, is it showing signs of exhaustion?

RSI fans are surely spotting divergences already – well, I confess I've had a look at them, but you know I'm not fond of that tool.

These other clues are more my style:

  • EURUSD printed a new high yesterday above 1.20694 (the August 29 high and the January 2015 high), extending to 1.2092. But it turned out to be a false break, and we are now trading well off that level at 1.2045.
  • A look at the five-minute chart shows that price has fallen below the 100- and 200-bar MAs (blue and green lines) at 1.2958 and 1.29496, respectively. If price holds below these averages, the selling can continue.
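The moving-average test in the second bullet is easy to reproduce. Below is a minimal sketch (not ForexLive's actual tooling) of computing 100- and 200-bar simple moving averages from a series of closes and checking whether the last price sits below both; the price series here is hypothetical.

```python
def sma(prices, period):
    """Simple moving average of the last `period` closes."""
    if len(prices) < period:
        raise ValueError("not enough data for this period")
    return sum(prices[-period:]) / period

# Hypothetical 5-minute closing prices, for illustration only.
closes = [1.2050 + 0.0001 * (i % 5) for i in range(200)]

ma100 = sma(closes, 100)
ma200 = sma(closes, 200)
last = closes[-1]

print(f"100-bar MA: {ma100:.5f}, 200-bar MA: {ma200:.5f}, last: {last:.5f}")
print("holding below both MAs" if last < min(ma100, ma200) else "not below both MAs")
```

The same check works on any bar size; the post happens to use the five-minute chart.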

What would give sellers more confidence?

The 38.2% retracement of the rally off yesterday's low sits at 1.2023. Yesterday's close is at 1.1221. A failed bullish move that turns a positive day negative is always telling. So this is the next step toward further downside.

Beyond that, traders have natural support at 1.2000, and the 50% retracement of the same bullish move is nearby at 1.20025.
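The 38.2% and 50% levels quoted above are standard Fibonacci retracements of a rally, measured down from the swing high. As a sketch: the swing high (1.2092) is taken from the post, while the swing low of 1.1913 is an assumption chosen so the output roughly matches the quoted levels, since the post does not state it.

```python
def fib_retracements(low, high, ratios=(0.382, 0.5, 0.618)):
    """Retracement levels of a move from `low` up to `high`,
    measured back down from the high."""
    move = high - low
    return {r: high - r * move for r in ratios}

# Swing high from the post; swing low is a hypothetical/inferred value.
levels = fib_retracements(low=1.1913, high=1.2092)
for ratio, level in sorted(levels.items()):
    print(f"{ratio:.1%} retracement: {level:.5f}")
```

With these inputs the 50% level lands exactly at 1.20025 and the 38.2% level near 1.2024, in line with the levels the post cites.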

So yes, EURUSD is pulling back somewhat, but each leg still has plenty of work to do for the correction to continue. Fortunately, the technicals are doing their job – not only defining the risk (no surprise there) but also defining the targets that would make the correction more comfortable for sellers.

15/02/2016

A linguist, a polyglot, and a translator walk into a bar...


By Wordwide FX Financial Translations

Via Languages Around the Globe

By Brian Powers

The more I write the more it becomes painfully and abundantly clear that outside of language circles, the general population does not really understand much about the practical aspects of language or the differences between many of the titles bestowed upon various roles within the language community.

Many people have done no more with language than master (hopefully!) their own native tongue and perhaps taken a course in a secondary language while in school. But whether you’re a diehard language enthusiast or a newcomer to the language learning community there are certain terminologies that are important to bear in mind when discussing the multifaceted landscape of language and linguistic individuals.

I’ve written this piece to better explain to you the differences between translators, interpreters, linguists and polyglots – titles often used incorrectly to describe members of our community.

Translator vs Interpreter

A common misconception is that people only learn languages to convert one language to another.

This is because normally, the only time someone sees a need for a multilingual person is when something needs to be translated or interpreted into their own language. They may not even understand the distinction between these terms, which is not entirely their fault. Even the media frequently seems to confuse “translator” with “interpreter” when dealing with a person that does not speak their language.

An interpreter is used when people are speaking in real-time, while a translator is used when someone needs text to be read or converted to another language. The skill sets required for each are very different and should not be confused with one another; a person that acts as one may not be suited for the other and vice versa.

Believe it or not, it matters a lot, especially to the translation and interpreting communities, who – surprise, surprise – take this stuff very seriously.

An interpreter needs a huge vocabulary in their language of choice and the ability to translate on the spot. A translator should also have a great vocabulary for translating written text, but he or she also has the opportunity to rely on other sources and, while often still on a tight schedule, can take more time to find the optimal terminology for a specific website, advertisement, article or any number of other publications.

The translator has the time to think about how to structure sentences properly, as well as the ability to make corrections later, before the final submission to their client. An interpreter needs the ability to construct sentences that make sense to their clients immediately. They must also have a working knowledge of recent changes to the language, like idioms and slang, so as not to misunderstand what the people they're interpreting for might actually be implying.

As an example: if the phrase "This course is a piece of cake" were interpreted literally into many languages other than English, it would make no sense. What does learning have to do with dessert? Idioms like that are also more common in spoken language, whereas written language is usually quite a bit more formal and thus less likely to employ such informal phrases or slang.

Linguists

For whatever reason, most people outside the field of linguistics don't seem to have a clue as to what linguistics is actually about.

Many people seem to be under the delusion that a “linguist” is someone who simply knows many languages. If you’re a linguist you’ve probably heard this assertion before, rolled your eyes, and launched into a tirade about the degree to which that’s not what you do!

In fact a linguist can be monolingual! It’s somewhat uncommon due to the nature of the field, but being a language scientist does not always require multilingualism in its participants.

The best way I have seen this term explained is that a linguist is one who studies the science of language, which includes the physical aspects of languages, such as their sound structures, syntax, relation to other languages and culture, and their evolution. Various linguistic terminology can be tossed in, such as “morphology”, “semiotics” and “phonology” to further confuse the outsider.

It could be explained that linguists do sometimes learn multiple languages, and are occasionally polyglots or language learners themselves, but they are normally more interested in the components than in the entire entity – much the way a nutritionist is more interested in the value of individual foods and how they can come together to promote better health, while a chef is more of an artist, creating the meal as an experience. The chef might be interested in the health value of the ingredients, and the nutritionist might cook some meals, but the two are not to be confused; they simply work with the same medium: food.

I have examined some of the more famous linguists and their lives. One of them was the Swiss-born Ferdinand de Saussure, widely recognized as the creator of the modern theory of structuralism as well as the father of modern 20th-century linguistics. He laid the foundation for many developments in linguistics, and his idea of linguistics as part of a general science of signs, which he called "semiology" or "semiotics", influenced many generations of contemporary linguists.

Many language enthusiasts could probably say they have heard of de Saussure, and some might even be able to say what he did. Most, however, would likely not understand the references, because in most cases they are simply not relevant to learning a language. Saussure could certainly be considered a polyglot, for he learned Latin, Sanskrit, Greek, English, German and French as a youth, then later added more to his lingual collection.

Another major player in the field of linguistics was Edward Sapir, most famously known for the Sapir-Whorf Hypothesis, which outlined his observations on how linguistic differences have consequences in human cognition and behavior. That is, our language affects our views and actions. Many language learners have at least heard of the hypothesis, even if they know nothing of the men – Edward Sapir and Benjamin Whorf – behind it.

Sapir also contributed greatly to the classification of Native American Indian languages, so his name also might be known by anyone who studies those.

My point is that while linguists have certainly made many contributions to language learning – the effects of which might even influence some of what a language learner deals with – linguists and multilinguals or polyglots are not interchangeable.

Multilinguals

The terms bilingual and trilingual are probably understood – meaning a person speaks two or three languages, respectively. However, I personally find that those terms are usually reserved for people who learned their languages naturally – second-generation immigrants, for example, who learn the language of their new country while also speaking their own native tongue within the family. In some parts of the world, like parts of Canada, where the population often speaks both English and French – both official languages – bilingualism is a fact of daily life.

Multilingual can also be used as a simple basket term for anyone who speaks multiple languages – usually more than two.

Polyglots

So what do we call someone who learns several languages? People adopt a variety of terms. Some simply say they are multilingual, while others describe themselves as language enthusiasts (a phrase I am fond of!).

The most widely used term I have encountered is polyglot. The word comes from the Greek "poly" (many) and "glotta" (language). While this should represent someone who speaks several languages, it has come to mean anyone speaking more than two or three. Not everyone who knows a few languages feels comfortable calling themselves a polyglot – I certainly don't consider myself one – feeling that the term should be reserved for someone who speaks more languages than they do.

There is a related term sometimes used – hyperpolyglot – meaning a person that speaks more than twelve languages. The term omniglot means “all languages”, but I have never heard anyone refer to themselves in that manner, since it is impossible for anyone to know all 7000+ languages.

Hyperpolyglots

They are the rock stars of the language world: hyperpolyglots are a very rare animal – the unicorns of the language enthusiast community.

While polyglots are often inspired by other polyglots, the true hyperpolyglot is the pinnacle of language greatness, a title that often gives way to great envy, skepticism and criticism. Just as celebrities are fawned over in society, so are hyperpolyglots discussed, tested, and often dismissed as unrealistic.

Why such a critical look at these individuals? I think it is a mixture of jealousy that someone could exhibit such superhuman qualities and a general feeling of mistrust toward such bold claims. One might accept that a person could be a chef in a restaurant, but not a master chef renowned throughout the world.

Conclusion

It’s important, within the language community, to know the difference between these categories. Many linguists are polyglots, many polyglots are interpreters, and some linguists have been known to interpret, but in general it behooves the average Joe to know the difference.

Which category do you belong to? Which of these do you most closely identify as? Leave a comment and let me know!

11/12/2015

Chomsky was right, researchers find: We do have a ‘universal grammar’ in our head


By Wordwide FX Financial Translations

Via PsyPost.com

A team of neuroscientists has found new support for MIT linguist Noam Chomsky’s decades-old theory that we possess an “internal grammar” that allows us to comprehend even nonsensical phrases.




“One of the foundational elements of Chomsky’s work is that we have a grammar in our head, which underlies our processing of language,” explains David Poeppel, the study’s senior researcher and a professor in New York University’s Department of Psychology. “Our neurophysiological findings support this theory: we make sense of strings of words because our brains combine words into constituents in a hierarchical manner–a process that reflects an ‘internal grammar’ mechanism.”

The research, which appears in the latest issue of the journal Nature Neuroscience, builds on Chomsky's 1957 work, Syntactic Structures. It posited that we can recognize a phrase such as "Colorless green ideas sleep furiously" as both nonsensical and grammatically correct because we have an abstract knowledge base that allows us to make such distinctions even though the statistical relations between the words are non-existent.

Neuroscientists and psychologists predominantly reject this viewpoint, contending that our comprehension does not result from an internal grammar; rather, it is based on both statistical calculations between words and sound cues to structure. That is, we know from experience how sentences should be properly constructed–a reservoir of information we employ upon hearing words and phrases. Many linguists, in contrast, argue that hierarchical structure building is a central feature of language processing.

In an effort to illuminate this debate, the researchers explored whether and how linguistic units are represented in the brain during speech comprehension.

To do so, Poeppel, who is also director of the Max Planck Institute for Empirical Aesthetics in Frankfurt, and his colleagues conducted a series of experiments using magnetoencephalography (MEG), which allows measurements of the tiny magnetic fields generated by brain activity, and electrocorticography (ECoG), a clinical technique used to measure brain activity in patients being monitored for neurosurgery.

The study's subjects listened to sentences in both English and Mandarin Chinese in which the hierarchical structure between words, phrases, and sentences was dissociated from intonational speech cues–the rise and fall of the voice–as well as statistical word cues. The sentences were presented in an isochronous fashion–identical timing between words–and participants listened to predictable sentences (e.g., "New York never sleeps," "Coffee keeps me awake"), grammatically correct but less predictable sentences (e.g., "Pink toys hurt girls"), word lists ("eggs jelly pink awake"), and various other manipulated sequences.

The design allowed the researchers to isolate how the brain concurrently tracks different levels of linguistic abstraction–sequences of words (“furiously green sleep colorless”), phrases (“sleep furiously” “green ideas”), or sentences (“Colorless green ideas sleep furiously”)–while removing intonational speech cues and statistical word information, which many say are necessary in building sentences.

Their results showed that the subjects’ brains distinctly tracked three components of the phrases they heard, reflecting a hierarchy in our neural processing of linguistic structures: words, phrases, and then sentences–at the same time.

“Because we went to great lengths to design experimental conditions that control for statistical or sound cue contributions to processing, our findings show that we must use the grammar in our head,” explains Poeppel. “Our brains lock onto every word before working to comprehend phrases and sentences. The dynamics reveal that we undergo a grammar-based construction in the processing of language.”

This is a controversial conclusion from the perspective of current research, the researchers note, because the notion of abstract, hierarchical, grammar-based structure building is rather unpopular.

07/12/2015

What's the hardest language to whisper in?


By Wordwide FX Financial Translations

Via quora.com

By Marc Ettlinger, PhD

Whispering involves maintaining a triangular opening in the vocal cords while speaking, allowing air to pass through without vibrating. There are some additional changes to articulation used as compensation to increase audibility, but lack of vocal cord vibration is the main feature. 




Vocal cord position while whispering


This serves to eliminate any voicing contrast used in a language, like the difference between /p/ and /b/. Thus, other features differentiating these sounds, like length, now serve as distinguishing cues.

As for which language is hardest to whisper: whether you count signed languages or whistled languages is just a matter of semantics. (I question whether a whistled language isn't simply a variant of a spoken language used over long distances, with no special properties of its own.) Obviously you can try to sign surreptitiously or whistle more softly or loudly, reflecting the intent of whispering, but not the actual mechanics.

Some languages lack a voicing contrast for consonants, including many Dravidian languages and Korean (see figure below). However, these languages lack the voiced consonants (b, d, g), so the voiceless consonants will be present, at least in some contexts. The point being, speakers of all languages will have had practice with voiceless phonation, which is what is needed for whispering.



Korean phonological inventory, lacking voiced consonants


In terms of consonants that are hard to devoice, all are nearly equivalent as far as I know. Rather, it's the voicing of certain consonants, e.g., f, that can be challenging. 

Ultimately, lacking voicing is primarily challenging from the perspective of perception, not production. We can ask, why is anything voiced at all? The reason is that it increases perceptibility: it increases sonority and serves to carry formant and pitch information, which is otherwise hard to hear for voiceless, a.k.a. whispered, segments. 

So, languages with many consonant place contrasts (like Hindi), and languages with tonal contrasts (like Mandarin Chinese), will be harder to understand when whispered.

22/10/2015

The bilingual advantage in phonetic learning


By Wordwide FX Financial Translations

Via Cambridge University Press

Blog post written by Mark Antoniou and Patrick Wong based on an article in Bilingualism: Language and Cognition 

Fundamental questions concerning language learning remain unanswered. Some learners are able to acquire a foreign language very successfully, whereas others are frustrated by their lack of progress. It is not clear why some learners flourish while others in the same setting struggle. Our study, published in Bilingualism: Language and Cognition, sought to shed some light on this topic.

Numerous factors are thought to be advantageous for non-native language learning although they are typically investigated in isolation, and the interaction between them is not understood. Firstly, it is often claimed that it is easier for bilinguals to acquire a third language than it is for monolinguals to acquire a second. This may be due to cognitive advantages associated with bilingualism, knowledge of a greater number of phonetic features, or greater perceptual flexibility that comes from having already learned an additional language. Secondly, closely related languages may be easier to learn because learners may benefit from their existing knowledge and fast-track their learning. Closely related languages are likely to share common features, and may thus allow a learner to skip having to learn those features. Thirdly, anecdotal evidence suggests that certain phonetic features (and perhaps even certain languages, more generally) might be universally more difficult to acquire regardless of prior language experience.

We tested each of these hypotheses in a series of experiments in which adults learned several artificial languages with vocabularies that differentiated words using foreign phonetic contrasts. In the first experiment, Mandarin–English bilinguals outlearned English monolinguals for both Mandarin-like and English-like languages, and both groups found the Mandarin-like (retroflex) artificial language easier to learn than the English-like (fricative voicing). In the second experiment, bilinguals again outlearned English monolinguals for the Mandarin-like artificial language. However, only Korean–English bilinguals showed an advantage over monolinguals for the more difficult Korean-like (lenition) language. Thus it seems that bilinguals, relative to monolinguals, show a general advantage when learning ‘easy’ phonetic contrasts, but similarity to the native language is useful for learning universally ‘difficult’ contrasts. These findings raise interesting new questions that we are pursuing in subsequent language learning experiments concerning the interaction between the characteristics of the language to be learned and individual differences among learners.

Read the full article ‘The bilingual advantage in phonetic learning’ here

04/08/2015

Sah-ry, eh? We’re in the midst of the Canadian Vowel Shift


By Wordwide FX Financial Translations

Via Maclean's

By Maegan Campbell 

Out with “oot.” No more “aboot.” Canada is talking with a New Speak. In a linguistic pivot called the Canadian Vowel Shift, we are pronouncing “God” more like “gawd,” “bagel” like “bahgel,” “pillow” like “pellow,” and “sorry” less like “sore-y.” The word “Timbit” is becoming “Tembet,” and “Dan slipped on the staircase” now sounds more like “Don” “slept” on it. First discovered in 1995, the new vowels are contagious, spreading rapidly from Victoria to St. John’s, where linguists are mapping the frequency of people’s voices and using ultrasounds to track their tongue and lip placement.

“We’re in the middle of a transformation,” says Paul De Decker, a sociolinguist at Memorial University of Newfoundland. “Our vowels are getting higher and backer in the mouth, and it’s more widespread, more diverse than we initially thought.”

Some linguists compare the shift to “Valley Girl” speech, which is perhaps most dramatically demonstrated by an American comedian in the hit YouTube video, “Shoes.” The chorus, “Shoes. Oh my God, shoes,” sounds more like, “Shahs, ah my gawd, shahs.” More mildly in Canada, we find the shift in the Air Canada pre-flight safety video when we hear, “Welcome aboard Air Canada.” Compared to a 1986 version, the “Canada” is now pronounced farther back in the mouth, like “Cahnadah.”

These changes in the mouth are happening under our noses. Even though the new pronunciation is used every day, almost nobody has heard of it—not the president of Canada’s association of university and college English teachers, nor the national director of Teachers of English as a Second Language. As it creeps into our speech under the level of social awareness, the vowel shift is known as a “change from below,” with a suspected epicentre in urban Ontario.

Wait, what the hall? De Decker explains the shift as a result of Canadian tolerance. As immigrants and visitors arrive with different accents, we have come to tolerate variation and to play with language ourselves. “If we weren’t tolerant,” he says, “we would crack down and say, ‘No, that’s not how it’s pronounced.’ Instead, we’ve started to push the envelope even further.”

With young women initially leading the shift, some experts suggest they subconsciously adopted it from California as a way to portray a more trendy identity. De Decker says the new Canadian vowels only partly resemble Valley Girl speech, and that the similarities may be coincidental; still, he agrees the new vowels are in vogue. “It’s like a badge saying, ‘These are all the people I’ve met, and I have the vowel system to prove it.’ ”

The Canadian Vowel Shift has now shot far beyond urban youth. One study found the shift to be equally advanced in Thunder Bay as in Toronto, and others have found it among seniors as old as 90. "People who don't consider themselves innovative or hip are showing it," says De Decker. We can even hear it in the Corner Gas theme song: "You think there's not a lot goin' on, but look closer, baby, you're so wrong." The "think" almost sounds like "thenk," and "lot" is more like "lawt."

The first person to discover the shift, Sandra Clarke, a linguist at Memorial University, says Canadians have long held potential for a change in their speech, based on their relaxed pronunciations of many words. For example, we say "cough" without the harsh "quaff" sound that might make us crank our heads in the U.S., and we say "caught" the same as "cot," without pronouncing the a or u at all. "When you have open space like that, vowels don't have to stay in their places," says Clarke. "The opportunity is there for new ones to move in."

Scholars debate which vowels have changed the most. Clarke thinks the consonants within words affect whether or not we shift our pronunciation of the vowels. The shift is most obvious, she says, in words with fricatives, which are letter combinations such as “th” and “sh.” “Shovel” is more like “shawvel,” and “thank you” resembles “thahnk you.” “I wouldn’t be surprised if fricatives are in the lead,” she says.

Although these sneaky vowels might jeopardize the sound of Canada’s iconic lingo, they are also helping unite us. Since the same change is happening in Red Deer as in Montreal, we may find decreasing distinction between accents. For bilingual people, the new pronunciation could even get carried over into their French, leading to more similarities in the sounds of the two languages. The English version of “baguette” stops rhyming with “vague-ette,” and “decor” stops resembling “de-core.” Meanwhile, the shift is distinguishing Canada even more from the U.S., where an estimated 34 million people around the Great Lakes Region are showing an opposite change called the Northern Cities Vowel Shift. There, God is becoming “gad,” “Dan” is becoming “din,” “slipped” is getting closer to “slapped,” and “sorry” more like “sarry.”

Aside from perhaps making spelling bees tougher, the current vowel shifts may well have lasting significance. The Great Vowel Shift of the 14th to 18th centuries marked the leap from Middle to Modern English, with Norman pronunciations rapidly changing words such as "lake" to no longer rhyme with "latté," as they do in other Germanic languages. That shift was responsible for most of the irregularities in English–the thousands of words pronounced differently than they are spelled. The changes today could lead to even more oddities in English in Canada and the U.S. Vowel shifts are messy.
