News From Telecom World

Archive for the ‘Research’ Category

I don’t often write about tech issues, but so much is being written about the Google Chrome OS announcement I thought I’d weigh in.

On the surface, the announcement of a Google operating system seems to many like a shot at rival Microsoft, an attack on MS’s core business. But those who have been following Google’s moves know that it’s more than that — it’s an (expected) evolution in Google’s long-term strategy.

Google is moving everything online, and I really believe this is the future of computing. The desktop model of computing — the Microsoft era — is coming to an end. It’ll take a few years, but it will happen.

The Old Model
For years, the OS has used the desktop analogy, with folders and files, all stored in a big file cabinet (your hard drive). And applications such as Word have run from the hard drive.

What this has meant is that, to guard against computer crashes (which are eventually inevitable), you’ve had to back up your files to a separate disk (another drive, a CD-ROM, etc.). It has also meant a headache when it comes to accessing your files and programs from multiple computers — you have to save and sync files all the time, and buy and install multiple copies of applications.

It’s also meant a lot of headaches when it comes to filing and finding your files, and sharing them with other people (this had to be done using floppy disks/CDs, or more recently, email attachments).

Finally, operating systems, trying to do everything, have become bloated and slow, taking up a lot of your computer’s processing power, memory and storage.

The New Model
Google’s model is based on connectivity to the Internet, a model that was unthinkable a decade ago and has only been really viable in the last few years as almost everyone has high-speed connections and wi-fi or mobile access.

Google has moved applications, and increasingly, our files, to the web (or cloud). It started with Gmail’s success — a fast, powerful online email app that beats desktop email apps hands down. It expanded with a suite of simple web apps: Google Calendar, Docs & Spreadsheets, Google Reader, Picasa for photos, eventually YouTube for video, Blogger for writing for the web, and more.

These apps are lightweight but powerful. They aren’t as feature rich as desktop apps, but here’s what many critics don’t understand: in today’s (and tomorrow’s) computing world, they don’t have to be.

While the business world has long used Microsoft Word to create rich documents full of formatting and charts, the increasingly mobile world doesn’t care about any of that. We send emails and text messages and tweets and messages on Facebook and forums and other social media — with no formatting at all. We do blog posts that have bold and italics and links and photos and videos and not much more in terms of formatting text.

We don’t need feature-bloated Microsoft Word anymore. Nor Excel, with its 2 million features, nor PowerPoint (who likes to watch slides?). Sure, there are still some great desktop apps that people use, for photo and video editing and much more … but the majority of us don’t need those. We need to communicate simply and quickly, without hassle.

Web apps don’t match desktop apps feature for feature … but for most of us who use the new computing model, that’s a good thing.

Web apps are lightweight and fast. They store all your files online, so you don’t need to worry about syncing them or carrying around CDs or flash drives, or backing up. You can share with anyone you like, or everyone, with a click.

This is what the computing world is becoming, and will be for many years. Google has driven these changes, and when it announced the Chrome browser last year, that was an obvious move to make the browser handle web apps better.

The Chrome OS is an obvious move to make computers bypass the old model of desktop apps and files and folders, and go straight to the web, web apps, and online files. Chrome OS will be lightweight and fast (like the Chrome browser), it will feature web apps and not much else, and it will be perfectly aligned with how more and more of us are using the web — with mobility, speed, sharing, and connecting in mind.

Why Google Music is the Next Logical Step
If you read the Google Chrome OS announcement carefully, you’ll see an interesting item:

“[People] want their data to be accessible to them wherever they are and not have to worry about losing their computer or forgetting to back up files.”

This obviously means Chrome OS will store all of your files online — so people don’t need to worry about backing up files, and if they lose their computer, nothing will really be lost.

And that makes sense, considering that Google has moved almost all your files online if you use its web apps: emails, photos (in Picasa), videos (in YouTube), documents (Google Docs & Spreadsheets), even PDFs now.

Almost all of your files.

The average user has one other type of file, though: mp3s. Sure, I know there are many other types of files, but I’m only concerned with what most people use computers for these days — email, online reading and social stuff, video, photos, music. And Google has not moved music online yet.

There are already sites that do this, but they’re not Google. So either Google will buy one of the online sites (like it did with YouTube and Blogger and Writely, which became Google Docs), or it will create its own.

Your mp3s will be stored online, and you’ll be able to play them from anywhere. This will complete Google’s goal of keeping all your files online.

Concerns: Connectivity and Privacy
There are two main concerns that people have when cloud computing or web apps are brought up, so we should talk about them briefly:

1. What if you’re not connected to the cloud? You might lose your Internet connection and with it access to your files. This is not a concern for most of us, as we’re almost always connected, more and more each year, especially with data plans on mobile devices (à la the iPhone). However, Google is already addressing this issue with Google Gears and HTML5 — you’ll be able to access your files and use web apps even when offline.
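The offline mode mentioned above can be illustrated with a generic network-first, cache-fallback pattern: read from the network when connected, serve the last locally cached copy otherwise. This is a sketch in Python, not the Gears or HTML5 API itself; every name in it is invented.

```python
# Sketch of the offline pattern that local browser storage enables:
# network-first reads with a local fallback. Illustrative names only.

class OfflineStore:
    def __init__(self):
        self._cache = {}        # stands in for local browser storage
        self.online = True      # connectivity flag

    def fetch(self, key, fetch_remote):
        """Read from the network when online, refreshing the local
        copy; fall back to the cached copy when offline."""
        if self.online:
            value = fetch_remote(key)
            self._cache[key] = value
            return value
        return self._cache.get(key)   # offline: last known copy, if any

store = OfflineStore()
doc = store.fetch("notes.txt", lambda k: "draft v2")   # online: cached
store.online = False
offline_doc = store.fetch("notes.txt", lambda k: "unreachable")
```

Even with the connection down, the second read still returns the cached "draft v2".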

2. Do I really want Google to have all my files and info? This is a valid privacy concern, and I don’t have an answer. My personal feeling is that I don’t have any data that I’m incredibly worried about losing or that might become public. I highly doubt Google would be interested in browsing through my files, as they’re not very interesting. And if Google uses my data to serve up better ads … what do I care? I don’t look at their ads anyway.

However, I understand the privacy concern. It may turn out to be an important issue, or it might just be something we learn to live with, as we have with many other privacy issues (government having access to our financial data, Microsoft getting info from our computers, etc.).



Source: zenhabits


Research released today by mobile device management company Mformation highlights that the mobile phone is becoming increasingly central to consumer lifestyles. A significant amount of information is now stored on mobile devices. 94% of users surveyed store telephone numbers while 65% also store address and other contact information on their phones; 83% have digital photos, 51% have videos, 48% have calendar information and 40% have music downloads. With ever-increasing phone and network capabilities, this trend of using the phone to store valuable and sensitive data from every aspect of life is set to continue.

One consequence of using the phone as a method for creating and storing data and information is that people must now worry about this material if the phone is lost or stolen — 82% of people fear that if their phones were lost or stolen, someone would use the information stored on them for fraudulent purposes. 90% of those questioned are worried about the loss of their personal data if a mobile device were to go missing, with 72% admitting that the personal information stored on their devices would be difficult to replace. In addition, 40% of respondents even said that losing a mobile would be worse than losing their wallet.

“Mobile phones are becoming more and more essential to user lifestyles,” commented Matt Bancroft, Vice President, Mformation. “People can access the Internet and store significant amounts of valuable personal information and other content on their mobile devices. With new advances in mobile technology arriving every day, this trend will only increase the role of the mobile device in people’s lives by providing us with increasing freedom to store, manage, send, and receive information. Mobile operators need to make sure that users are confident that their devices are secure, the data on those devices is protected, and device content can be backed up and recovered if a phone is lost or stolen. Such a high level of dependency on mobile phones today means that operators need capabilities to help minimize risk and maximize trust,” continued Bancroft.

Because mobile phones are being used for such a wide range of activities, losing a device can prove devastating for the user: 91% of people questioned in the UK and US said they would be “devastated” if they lost their mobile phones. For this reason, it is striking that three-quarters of the people interviewed said it would take a day or more to get a new phone fully up and running with all their personal data after a loss or theft — 61% of people said this should take 2 hours or less.

“Operators need to step up to the mark to make sure that their customers are getting the service they expect in terms of security, data recovery and phone setup,” said Bancroft. “As people continue to increase their reliance on mobile phones for everyday actions, operators have to make sure that they are ready to support this increased commitment by the user. More extensive use of the device is great, but the mobile operators need to underpin this activity by offering capabilities to protect and manage users’ data if things go wrong.”

The research was undertaken by independent research house Coleman Parkes, which asked 4,000 people in the UK and US about problems related to mobile usage.

Source: Total Telecom

With stronger competitive pressure from internet players, telecom operators are giving more attention to transformation through unified operations support systems (OSS) rather than the best-of-breed approach.

For Clarity International CEO Tony Kalcina, much of the push comes from the web players’ successes in OSS and billing, which ironically build on the achievements of the telcos themselves.

New entrants like Google and Facebook come from a completely different mindset on billing and OSS — they just plug systems in, charging is easier, and deployment takes hours and days, not the months and years typical of telecom.

“What these players have done is that they focus on the real problems and not the symptoms,” Kalcina said. “They’ve gone top-down and, to a large extent, they are riding on the achievements of the telco industry in terms of the ability to make this happen.”

But with their legacy OSS and BSS platforms, how do telcos compete with the agile newcomers?

The internet players “are in their honeymoon period when there aren’t so many users and customers don’t pay for services so they are not as demanding,” Kalcina said. “The quality of delivery and the engineering put into the delivery of those services are probably not as high as they are in the telecom industry.”

Telcos, he said, have learned that although it is important to have processes, quality and predictability, it is far more important to understand the actual customers and deliver solutions to them.

“The telcos are talking about transformation – having personalized services, providing a better brand, benchmarking, all sorts of engaging with the hip nature of the end-users that they target,” Kalcina said. “At the same time, the focus is on streamlining and automating so they can become a reliable and dependable service delivery engine.”

More formalized approach

Many operators are putting major resources behind this movement and have started to formalize the process, securing buy-in from their boards. This, he said, is because transformation takes time, as it is now far more holistic than just a network or NGN transformation.

“They are transforming because the landscape is changing,” Kalcina said. “These telcos believe they will obviously be out of business if they don’t change the way they interact with and provide solutions to their customers.”

Still, telcos need to be careful of trying to ramp up quality in their entire structures, some of which may not be necessary in terms of what is being delivered to the customer.

Kalcina said the pitfall usually relates to engineers, who sometimes forget the real reason why telcos are doing what they are doing, so the tendency is to over-engineer for technical perfection unnecessarily.

“For example, telcos have what are called geographic information systems that are very large, complex and expensive projects,” he explained. “One of the classic reasons why these projects fail, cost so much and take so long is that the engineering department wants to understand the network with 120 layers worth of detail.

“But when you stand back and analyze what the problem is, it’s neither an engineering nor a technical problem — it’s a challenge of automating the process of rolling out a network,” Kalcina added. “What you need to know can be summarized in half a dozen layers.”

Currently, it is the analysts who are painting a rosy picture of unified OSS as a desirable strategy. Kalcina said companies espoused the best-of-breed approach because the buying habits of large telcos are driven by politics and the tribal behavior of departments.

“Enterprise architecture is a new development that tends to lend itself for more enterprise-based approaches — solving the automation problem, self-service, and fulfillment and assurance,” he said. “So in the last couple of years you have analysts saying that the only way to solve the next-generation network transformation is through unified systems versus the fragmented best of breed.”

Operators in both developed and emerging markets are in the same boat with regard to competition from internet players. But are they on the same page in how they regard the move to unified OSS? Kalcina said they all see the need to move away from islands of operational management and service delivery technology toward a far more unified, customer-centric and automated model.

“The industry has reacted, with players like Amdocs and Oracle moving toward creating a unified solution map,” he said. “They are moving toward the network, creating this engine that we can no longer call OSS but is effectively a converged service delivery factory that allows for customers to choose their own application suite that suits whatever they are trying to do.”

Even so, telcos in emerging markets such as those in Asia are more receptive to, and understand, the unified approach more than developed markets give them credit for. These companies – run by executives who want to do things in a bigger, better way – find ways to leapfrog the achievements of established telcos in the West.

This is one reason why Clarity has been Asia-centric, he noted, adding that telcos in Asia are far more interested in listening to new ways and they tend to be early adopters despite the difficulty of changing industry standards and trends.

“When we were starting, best of breed was definitely the way to do things and it was very hard to sell the [unified] story to the big players,” Kalcina said. “We found a lot more receptiveness in the new players, and then later with the incumbents that compete with the new players.”


New research from Wireless Intelligence has revealed that the world’s largest mobile handset vendors are facing severe financial pressures as they continue to invest in their product portfolios during the market slowdown. We estimate that the average operating margin at the top five vendors – Nokia, Samsung, Sony Ericsson, Motorola and LG – declined to around 4% in 1Q09, down from an average of 13% a year ago.

However, as our data shows, vendors have been unable to reduce expenditure in core areas such as R&D and marketing to offset shrinking margins. R&D expenses are estimated to have risen from 10% in 1Q08 to 12% in 1Q09, while marketing costs have risen from 9% to 10% over the same period. Margins in the sector have also been negatively affected by a high level of product inventories, fierce price competition, and a decline in the Average Selling Price (ASP) of devices.

The figures highlight the predicament faced by handset vendors as they continue to invest in markets where demand is slowing and margins are shrinking. Vendors are looking to cut costs in R&D and marketing during 2009.

In the first quarter of the year, the top five vendors shipped 191 million devices worldwide compared to 236 million a year ago, a 19% decline. Many vendors – including market-leader Nokia – are predicting a further 10% decline in shipments during the current calendar year. Shipments for the current quarter are expected to be flat sequentially and down year-on-year.
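The 19% figure follows directly from the shipment numbers quoted above, as a quick arithmetic check shows:

```python
# Sanity check of the decline quoted above: top-five handset shipments
# fell from 236 million to 191 million units year-on-year in 1Q09.
q1_2008 = 236e6
q1_2009 = 191e6
decline = (q1_2008 - q1_2009) / q1_2008   # ~0.19, i.e. the 19% drop
```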

Nokia, Sony Ericsson and Motorola all reported an annual decline in revenues in 1Q09, while only Samsung and LG, the two South Korean vendors, reported positive revenue growth. As a consequence, profits have dropped significantly at most vendors.

Our analysis forecasts the following trends in the mobile devices market in 2009:

• cost savings to be made in R&D as vendors look to streamline their portfolio of devices;
• vendors to focus on smartphones to generate high margins and lower-tier devices to generate high volumes and maintain market share, with less focus on devices in the mid-tier segment;
• vendors embracing content/media services, shifting the battle in the handset space from a pure hardware play to a software-centric strategy;
• an increase in joint-marketing campaigns with mobile operators and distributors;
• the outsourcing of lower-tier devices to Original Design Manufacturers (ODMs) such as HTC, Compal, Foxconn and Asustek.

The current pressures we are seeing in the handset market have been caused mainly by the high level of maturity in developed markets, though the economic downturn has also played a part with regard to unfavourable currency fluctuations. Depending on the level of subsidies and how fast the bill-of-materials for smartphones declines, handset vendors will have to monitor price elasticity and device price erosion carefully in 2009.

Vendors that do not have a compelling and competitively-priced portfolio of devices ready to ship in time for the crucial last quarter of the year are likely to be in for a rocky ride in 2010.


The principal performance benefit femtocells bring to LTE is to ensure more users receive peak data rates more of the time, especially inside buildings where the most mobile broadband data is consumed and where the service quality is lower than outside.

Femtocells achieve this by maximising spectrum re-use and by ensuring power, and therefore network capacity, is not wasted by trying to penetrate buildings.

Although most discussion to date of femtocells has focused on residential use, the technology is also applicable to enterprises, public spaces or ‘metrozones’ and hot-spots. In this way, femtocells are an important complement to the more traditional macrocell deployment.

There is also growing evidence that femtocells are essential to deliver further improvements in mobile network design, and in the case of LTE, to ensure more subscribers receive peak data rates. LTE is approaching the theoretical maximum information transfer rate (Shannon’s limit) and further improvements will only be possible by rolling out more, smaller cells.

In fact, an analysis of Cooper’s law – which holds that wireless capacity doubles every 30 months – shows that the dominant factor in improvements to date has been the use of smaller cells as opposed to other methods such as revised modulation techniques, better coding or the use of more frequencies.
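The point about Shannon's limit and small cells can be made concrete with a back-of-the-envelope comparison. The Shannon capacity formula is standard; the 20 MHz carrier and 20 dB SNR below are illustrative assumptions, not figures from the article:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon's limit: C = B * log2(1 + SNR), in bit/s."""
    return bandwidth_hz * math.log2(1 + snr_linear)

bandwidth = 20e6                                # one 20 MHz LTE carrier
per_cell = shannon_capacity(bandwidth, 100)     # 20 dB SNR: ~133 Mbit/s

# Lever 1: double the SNR (3 dB more link budget) -> only ~15% more capacity
snr_gain = shannon_capacity(bandwidth, 200) / per_cell

# Lever 2: split the area into two small cells re-using the same
# spectrum -> area capacity simply doubles
cell_gain = (2 * per_cell) / per_cell
```

The logarithm flattens the return on better radio links, while adding cells scales capacity linearly, which is why smaller cells dominate the gains Cooper's law describes.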

Maximising spectrum efficiency
Femtocells also allow operators to make the best use of their LTE spectrum. A significant amount of new spectrum will be made available for LTE in the high frequency bands, which do not penetrate buildings effectively but are ideal for femtocells.

Where LTE networks are deployed solely in the scarcer and more valuable sub-1GHz bands, where in-building penetration is better, femtocells will still be required to provide the extra network capacity needed to deliver on the promise of high data rates for all subscribers.

Business case
Femtocells also allow operators to create a more compelling mobile broadband business case. Femtocells can lower considerably the delivery cost per bit through significant savings in cell site installation, maintenance and backhaul costs.

LTE can revolutionise mobile broadband — and femtocells can play a role in helping it to deliver its potential. By adopting femtocells, operators can roll out a much better performing LTE network than they could with macro base stations alone, at a lower cost and with less risk. All these factors are crucial in the current uncertain economic environment.

New services
Femtocells are also designed to power the kinds of next generation services that are expected to epitomise LTE. LTE femtocells will provide the best possible environment for downloading and streaming media from the internet or between devices in the home without loading the mobile network at all.

In the case of sharing media in the home, femtocells will not even require broadband backhaul, and will therefore not be limited by throughput restrictions on the network, capitalising on the full peak rates of LTE. Additionally, femtocells ‘know’ when a consumer is in the home, enabling presence-based applications that trigger automatically when a consumer enters, or leaves, the home.

New roll-out strategies
Operators can also use femtocells to lower the cost, and therefore the risk, of LTE rollout by adopting a different strategy to that employed in 2G and 3G networks.

Although traditional macro base stations will still be essential to provide widespread surface coverage, operators can use both indoor and outdoor femtocells from the outset to carry substantial amounts of data traffic thereby realising major savings on backhaul and other associated capacity costs.

Combining femtocells and macrocells in this way allows operators to build their LTE networks incrementally in line with demand and avoid the need to second guess user uptake.

Source: telecom asia

European researchers believe they have achieved what has remained an almost impossible dream in the wireless world: powerful mobile services that work simply, seamlessly and intuitively. Their mobile services platform is generating a lot of excitement among fellow researchers and developers, and it already works with Windows Mobile and Symbian, two leading mobile phone operating systems.

Now there are advanced discussions about porting the system to the iPhone. The system is simple to use, set up and trust. That is why the researchers called their project Simple Mobile Services (S.M.S.).

S.M.S. holds amazing promise. Even though mobile phones have been around for far longer than web technology, mobile services have historically failed to inspire users. Web services are far more advanced, open and are in a state of constant innovation. S.M.S. wants to make that type of innovation possible for the mobile world.

Great challenges

The challenges are great. Mobile applications do not enjoy universal standards. Even the most modern mobile phones have a comparatively small screen, and a dinky keypad.

These are fundamental constraints that go some way to explaining the slow pace of mobile service development. To meet these challenges, the S.M.S. project has developed a platform and a suite of supporting tools to finally make mobile services a simple reality, saving them from the complicated mess of competing technological paradigms and proprietary silos.

Open, universal and easily deployed

“We wanted to make our Simple Mobile Services platform open source, and universal, so they do not depend exclusively on service providers,” explains Nicola Blefari Melazzi, coordinator of the project.

Businesses that wish to deploy S.M.S. for their workforce need merely to set up a server in their IT department. S.M.S. can be deployed by anyone who wishes to use it, with or without support from network operators.

That is a vital advantage. One of the barriers to mobile services has been control of the technology by network operators. This has limited the universal appeal of mobile services, leading to a plethora of incompatible systems that users have to re-learn each time they change operator.

Openness is a vital advantage for S.M.S., but it is just the beginning of the project’s work. The team has sought to tackle every stage of mobile services production and deployment. In the process, it has developed many highly innovative, and intuitive, tools and functions.

Mad about MEMs

For instance, S.M.S. has developed MEMs (Mobile Electronic Memos). A MEM is an electronic note that users can use to capture information about locations, people, services and websites, and to share the information with friends and colleagues.

Users can capture, annotate and store MEMs associated with their current environment, for example a business card for the restaurant where they are eating or the person they are talking to. Or they can capture a MEM produced by a service they are using, such as confirmation of a booking from an airline.

The software developed by the S.M.S. project makes it easy for users to send MEMs to other users, to share them with a broader community, or to use them as input for online services. So a user in a specific location can capture a MEM for the location and send it to a friend.

The friend can pass the MEM to a navigation tool, which will guide her to where her friend is waiting. It is possible to embed MEMs in emails, or even in mobile instant messages. These adaptations make the MEM accessible to users who do not have special software. MEMs can be made available on the web and easily downloaded by mobile users.

This offers powerful functionality. For example, imagine an airline deploys a MEM to update customers about their flights. You receive your flight time and gate on your phone via a MEM. You then send this MEM to your friend who will pick you up at your destination.

Suddenly the airline changes your departure details. Instead of receiving a new MEM, the original MEM updates with the new information and emits an alert, and so does the MEM of your friend.
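The live-update behaviour described above is essentially an observer pattern: the issuer edits a shared memo once, and every holder of a copy is alerted. A minimal sketch follows, with all names invented; the real S.M.S. data model is not shown in the article.

```python
# Toy model of a live-updating MEM as an observer pattern.
# Class and field names are illustrative, not the S.M.S. API.

class Mem:
    """A shared electronic memo whose holders are alerted on change."""

    def __init__(self, data):
        self.data = data              # e.g. flight details
        self._watchers = []           # callbacks for everyone holding it

    def share_with(self, on_alert):
        """Give someone a live copy: they will be alerted on updates."""
        self._watchers.append(on_alert)

    def update(self, **changes):
        """The issuer (the airline) edits the MEM once;
        every holder sees the new data and receives an alert."""
        self.data.update(changes)
        for on_alert in self._watchers:
            on_alert(dict(self.data))

alerts = []
mem = Mem({"flight": "XY123", "gate": "B4"})
mem.share_with(alerts.append)         # your phone
mem.share_with(alerts.append)         # your friend's phone
mem.update(gate="C7")                 # the airline changes the gate
# both copies now show gate "C7", and two alerts fired
```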

Primary technologies

There are possibly dozens of powerful applications from this feature alone, and the innovation does not stop there. The project developed five primary technologies to enable simple mobile services, and MEMs are just the first.

Another is MOVE – short for Mobile, Open and Very Easy – which is a browser for mobile services running on mobile devices. MOVE allows users to instantly access services based on their profile and current context, such as time, location, task and others. It can manage MEMs, supports indoor and outdoor navigation (a map to your departure gate, for example), and many other functions and features.

SMILE, you have APIs

The third innovation from the project is SIM-based security. S.M.S. partners Sagem Orga GmbH and Telecom Italia have developed verification and certificate systems that are hardwired to the SIM card in users’ phones and offer very solid security support to authenticate MEMs, for example.

A Service Authoring Wizard allows new services to be created quickly, without any technical expertise. Users can simply plug components together to make a new service, defining simple parameters like date, location and so on.

All this is made possible by SMILE-JS, an abstraction layer between the application and the underlying S.M.S. platform, which enables Java APIs (Application Programming Interfaces). APIs are the ‘sockets’ that allow non-technical users to plug together diverse services on the web to create a mash-up, such as real estate listings and Google Maps, or Flickr content.
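A mash-up of this kind is, at bottom, a join between two services' outputs through plain function "sockets". The listing and geocoding data below are entirely made up for illustration; no real estate or Google Maps API is being called:

```python
# Mash-up in miniature: enrich one service's output (listings)
# with another's (geocoding). All data here is invented.

listings = [
    {"address": "1 High St", "price": 250_000},
    {"address": "2 Low Rd",  "price": 180_000},
]

def geocode(address):
    """Stands in for a mapping API call returning (lat, lon)."""
    coords = {"1 High St": (51.50, -0.12), "2 Low Rd": (51.48, -0.10)}
    return coords[address]

# the mash-up: each listing joined with its map coordinates
mashup = [dict(l, latlon=geocode(l["address"])) for l in listings]
```

The point of an API is exactly this: each side exposes a small, predictable interface, so combining them needs no knowledge of either service's internals.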

Powerful applications

Added together, these components and tools make developing, deploying and, crucially, using mobile services extremely simple. Combined, they pave the way to powerful new mobile applications and services.

For example, street features like signs, traffic lights or shop displays could emit a MEM to nearby mobile phones, based on user profiles. Imagine you get lost in a foreign city and you do not know the language so can’t read addresses or input them in a GPS navigator.

With S.M.S. tools, you can capture your location and send it via a MEM to a friend. The friend can pass the MEM to a navigation tool which will guide her to where you are waiting. Or, you could set your profile up for ‘tourism’, and your phone could find and retrieve MEMs from nearby points of historical or cultural interest. Set the profile to business and it looks for taxi ranks, or banks, or a specific conference or restaurant.

The technology could be used for ‘integrated vertical’ applications, targeting particular markets, companies or user populations, or a combination of all three. This could enable mobile city services, enhanced e-ticketing, fleet management, parcel handling, community services, and much more.

The system and authoring wizard are so simple that many of these types of task could be set up and deployed by people with no technical expertise.

Trialling S.M.S. with students

S.M.S. has trialled its platform with students at the University of Roma II. The system helps students to network, and informs them when a class has changed venue or has been cancelled. The technology was very popular with the 100-strong sample of students who used it.

The project finished its work in February 2009. S.M.S. has stimulated considerable interest in industry. The project partners are working intensively to explore prospects for commercial exploitation.

“We want to build an open-source developer community around the technology, and of course we would be interested in talking to anyone who is interested in our work,” says Nicola Blefari Melazzi.

Viable solution

It is early days of course, but the potential upside is enormous. Mobile services are, as yet, an unsolved problem – incredible, considering the potential for revenue and new services that bridge the gap between cyberspace and the real world.

Yet the S.M.S. project has completed a full system that can be deployed now on existing technology, simply by downloading and installing the client to mobile phones, and setting up an appropriate server.

The project has created a viable solution to the unsolved problem of universal, functional, open and simple-to-use mobile services.

The S.M.S. project was funded by the ICT strand of the EU’s Sixth Framework Programme for research.

Source: cellular news

The latest data from IMS Research shows that the SIM card market continues to defy the economic downturn that is currently hitting the mobile market. Total shipments of SIM cards in 2008 grew to reach the record number of 3.07 billion. This is in marked contrast to the numbers from manufacturers, which showed handset shipments declining sharply in the second half of the year.

Research Director, John Devlin, said, “The well publicised slump continues to affect the mobile sector in 2009. Subscriber growth has virtually come to a halt in many mature markets and is slowing in emerging markets. Similarly the handset market is struggling with manufacturers scaling back operations and restructuring in an effort to minimise costs. We only expect subscriber numbers to grow 6.9% this year, which is a long way down from the 18.6% recorded in 2008. In terms of handset volumes the outlook is even worse; IMS Research is forecasting the market to shrink by 7.8% to 1.09 billion in 2009.”

However, the latest figures from its new report show SIM card volumes rising by 419 million this year, pushing the market to a new high of 3.49 billion. So why is this occurring, and what is driving it?
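Those figures are arithmetically consistent, as a quick check shows; note that the implied 2008 handset base is derived here, not quoted in the report:

```python
# Cross-checking the SIM and handset figures quoted above.
sim_2008 = 3.07e9                  # record 2008 SIM shipments
sim_rise_2009 = 419e6              # forecast increase for 2009
sim_2009 = sim_2008 + sim_rise_2009          # ~3.49 billion, as stated

handset_2009 = 1.09e9              # forecast after a 7.8% shrink
handset_2008_implied = handset_2009 / (1 - 0.078)   # ~1.18 billion (derived)
```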

“Much of this is down to the simple fact that the cost of providing a SIM card is much lower than that of providing a handset,” added Devlin. “With this level of price differential a SIM-only proposition is a much more attractive option for both network operators and consumers. It saves the operator the cost of subsidising a handset to make it more affordable for end-users, whilst providing a way to continue to sign up new customers – and perhaps more importantly re-sign existing ones. At the same time these savings can be passed onto subscribers in the form of more competitive, lower cost contracts and pre-paid bundles.”

This is contributing to the increasingly competitive grey area between operators and handset manufacturers. Operators have traditionally relied on new handsets to support new services and attract customers. However, developments mean that consumers taking advantage of SIM-only offers need not miss out on new services if they retain existing handsets. Late last year Sagem Orga announced that it was partnering with BlueSky Positioning for A-GPS-enabled SIM cards to deliver services such as “buddy finder”, dating and nearest shop/restaurant/ATM. Similarly, earlier this year Oberthur Technologies launched a SIM card enabled with an accelerometer which will enhance applications and interactive services requiring motion sensing capabilities. Extending this trend, increasingly SIM cards will feature secure elements for mobile banking and NFC payments, and greater memory capacity and smart card web servers (SCWS) for enriched services and Internet usage.

“With everyone trying to define their unique selling point and gain an edge that justifies their long-term competitiveness in this lucrative market, this will continue to be an interesting area of convergence and competition,” concludes Devlin. “Whilst some of the mobile market has been hard hit by the economic downturn, it may just provide the catalyst for these new SIM-based developments to come to the fore.”

Source: cellular news

