There is no universal solution to power your device; the right battery really depends on your requirements. To find the right powering solution you need to consider several parameters, and in this post I will try to list most of them. This post is not exhaustive: I'm not a battery expert. It is based on my own experience, so consider it a starting point, not a solution.
Battery dimension is, together with autonomy, one of the first criteria when making an IoT device. I'm starting with this one because dimension is one of the first criteria limiting your expectations in terms of autonomy. If you have no space constraint, you are lucky! Battery choice will be easier.
Basically we have different types of batteries (non-exhaustive list):
- Button cells: these batteries are mostly based on lithium technology and provide 3V. You can find some up to 1000mAh with a compact form factor. They are usually not able to deliver strong currents, and for LPWAN you need to pair them with a supercapacitor to supply the transmission energy.
- Cylindrical cells: this is where you will find the largest number of batteries and the widest mix of technologies. In the AAA form factor your choices will be a bit limited, but starting from AA (or 14500) you will find a lot of solutions. Thanks to the large choice of technologies, a single cell can deliver up to 3.7V for powering your device.
- Cubic cells: most of the cubic solutions are based on Li-Po technology. You can find any size depending on the capacity you are looking for. As Li-Po cells are able to deliver large current peaks, this makes them an efficient and low-cost powering solution for tiny devices.
In addition to the size of the battery itself, you need to consider the way the battery will be connected to your device / object. I've spent a lot of time on this topic, as the mechanical considerations can quickly become complex.
Using a battery holder is much easier, but it quickly gets expensive and takes a lot of space when you have multiple cells. Using a connector on a wired battery is a good compromise.
As there is no standard for connector / polarity, you will have to specify it when ordering batteries. When ordering batteries on Alibaba, this point was not a problem: they can make what you want, even for small quantities. It is just a factor of stress until you have the battery in your hands.
the voltage question
Directly related to the dimension question comes the voltage question. It is not only related to the battery but also to your circuit design. We usually need to power the MCU at 3.3V; the batteries can provide more or less than this voltage.
step up approach
With a step-up approach, the battery source is lower than 3.3V and a step-up circuit increases the voltage up to 3.3V. This is useful for powering with alkaline cells (1.5V) or NiMH cells (1.2V): you can power with one or two cells and deliver the expected voltage. Reducing the number of cells helps where price and dimension criteria matter. In exchange, the cells have to deliver more current: the current peak grows as the cell voltage decreases, and the risk of a voltage drop at the cell's end of life is high.
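A quick way to see why the current peak grows as the cell voltage drops: for an ideal step-up converter, input power roughly equals output power divided by efficiency, so the battery current is I_in = (V_out × I_out) / (V_in × η). A minimal sketch, assuming an illustrative 50mA radio peak at 3.3V and 85% converter efficiency (both numbers are assumptions, not from a specific device):

```python
def stepup_input_current(v_out, i_out, v_in, efficiency=0.85):
    """Approximate battery current (A) drawn by a boost (step-up) converter."""
    return (v_out * i_out) / (v_in * efficiency)

# 50 mA radio peak at 3.3 V, from a fresh 1.5 V alkaline cell...
fresh = stepup_input_current(3.3, 0.050, 1.5)   # ~129 mA
# ...and from the same cell near end of life at 0.9 V
eol = stepup_input_current(3.3, 0.050, 0.9)     # ~216 mA
print(f"fresh cell: {fresh*1000:.0f} mA, end of life: {eol*1000:.0f} mA")
```

The battery sees almost twice the peak current at end of life, which is exactly where its ability to deliver current is weakest.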
step down approach
This is the most usual approach: the battery source is higher than 3.3V and a step-down circuit decreases the voltage down to 3.3V. Most of the lithium solutions are higher than 3.3V. This also lets you add cells to get a larger capacity. Depending on the technology, it is not recommended to use batteries in parallel, as you risk charging one cell from another, which could damage the cells. Putting batteries in series avoids this risk.
Choosing a step-down circuit operating over a large voltage range will give you larger battery / autonomy choices. As an example, you can select 3×1.5V low-cost alkaline batteries (4.5V – 8.4Wh) or 3×1.2V low-cost NiMH rechargeable cells (3.6V – 7.2Wh). You can also go up to 3×3V LiMnO2 cells (9V – 18Wh) or 3×3.6V LiSOCl2 cells (10.8V – 24Wh). That is a factor of 3 in energy: for the same volume you can get 3 times the autonomy. This is an interesting option to leave to your customer: price vs autonomy.
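The energy figures above are simply pack voltage times capacity. A small sketch of the comparison, assuming illustrative AA-size capacities (around 1.87Ah for alkaline, 2Ah for NiMH and LiMnO2, 2.22Ah for LiSOCl2 — always check the real datasheets):

```python
# Energy (Wh) = pack voltage (V) x capacity (Ah); capacities are illustrative
packs = {
    "3x1.5V alkaline": (3 * 1.5, 1.87),
    "3x1.2V NiMH":     (3 * 1.2, 2.00),
    "3x3.0V LiMnO2":   (3 * 3.0, 2.00),
    "3x3.6V LiSOCl2":  (3 * 3.6, 2.22),
}
for name, (voltage, capacity_ah) in packs.items():
    print(f"{name}: {voltage:.1f} V, {voltage * capacity_ah:.1f} Wh")
```

Running this reproduces the roughly 8.4Wh to 24Wh spread mentioned above for the same 3-cell volume.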
step nothing approach
In this case you directly connect the battery to the device with no voltage conversion. It means your battery delivers something really close to 3.3V. When using a 3V lithium battery, this solution is viable. For 3.7V lithium solutions I do not recommend it. In any case you need to take a look at the battery specifications:
- The indicated voltage is the voltage plateau, not the maximum one. So a 3.7V Li-Po battery starts at 4.2V, which is usually above the circuit's limits.
- The voltage will decrease along the battery life; depending on the technology, the drop can be 1% or 50%. As an example, a 1.5V alkaline cell will decrease linearly down to 0.8V.
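A quick sanity check for the "step nothing" approach follows from these two points: the battery's full voltage range, from fresh to end of life, must stay inside the circuit's supply limits. A minimal sketch, assuming a hypothetical MCU rated for 1.8–3.6V (a common range, but verify against your own datasheet):

```python
def direct_supply_ok(v_battery_max, v_battery_min, mcu_min=1.8, mcu_max=3.6):
    """True if the battery can be wired directly, with no regulator."""
    return mcu_min <= v_battery_min and v_battery_max <= mcu_max

print(direct_supply_ok(3.0, 2.0))   # 3V lithium coin cell: True
print(direct_supply_ok(4.2, 3.0))   # "3.7V" Li-Po starts at 4.2V: False
print(direct_supply_ok(1.5, 0.8))   # single alkaline drops to 0.8V: False
```

Note that it is the 4.2V starting voltage, not the 3.7V plateau, that disqualifies the Li-Po here.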
the electric current question
Different battery technologies deliver different amounts of current. This is not a problem if you do not need much power and have no peaks, but it becomes a key point as soon as you need high current peaks. This is typically the case when making LPWAN IoT devices, and it is almost critical when you operate in zones where the transmission power exceeds 20dBm.
Technologies like alkaline, Li-Po… are able to deliver large currents and won't be a problem on that point. Technologies like LiSOCl2 or coin cells are not able to deliver such power. Adding a supercapacitor circuit resolves this situation, but it impacts the device cost and has to be decided during the object design phase.
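Sizing that supercapacitor is a simple charge budget: C ≥ I × Δt / ΔV, where ΔV is the voltage droop you can tolerate during the burst. A sketch with illustrative numbers only (a 50mA peak for a 2s transmission, with 0.3V of allowed droop):

```python
def supercap_size(i_peak_a, t_burst_s, v_droop_max):
    """Minimum capacitance (F) keeping the droop under v_droop_max
    while the capacitor alone supplies the transmission burst."""
    return i_peak_a * t_burst_s / v_droop_max

c = supercap_size(0.050, 2.0, 0.3)
print(f"minimum capacitance: {c:.2f} F")  # ~0.33 F
```

In practice you would add margin for the capacitor's own leakage and ESR, and for the weak cell recharging it slowly between messages.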
Battery capacity is a key factor for your device autonomy. On this point you need to be really careful about what is written on the battery itself. This is particularly true for no-name batteries, but also for branded ones. The most serious battery providers will give you a datasheet with battery specifications and discharge curves. This information is not sufficient to select your battery, but it will help you trust the claimed capacity.
With no datasheet, do not trust what is printed on the battery!
To illustrate this, I've tried different batteries:
- One LiFePO4, rated at 900mAh. I've been able to get 400mAh at most from this battery.
- One lithium-ion, rated at 2800mAh. I've been able to get 350mAh at most from this battery.
With such error ratios, I let you imagine your client's satisfaction if you claim one year of autonomy and the battery has to be changed after only a month.
I really recommend verifying battery capacity by discharging some samples. Select different loads to create your own documentation on the batteries you are using. A tool like OTII will help you a lot for this.
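Verifying a capacity claim boils down to integrating the current over the discharge: Q = Σ I·Δt. A minimal sketch of the coulomb counting a tool like OTII does for you, applied to a hypothetical constant-load log (the 10mA / 40h figures are invented for illustration):

```python
def measured_capacity_mah(current_samples_ma, dt_s):
    """Coulomb counting: sum current samples (mA) taken every dt_s seconds,
    converting mA·s to mAh."""
    return sum(current_samples_ma) * dt_s / 3600.0

# hypothetical log: 10 mA constant load, sampled every 60 s, cell dead after 40 h
log = [10.0] * (40 * 60)
print(f"{measured_capacity_mah(log, 60):.0f} mAh")  # 400 mAh
```

Repeating this at several loads gives you the discharge documentation mentioned above, and a number you can actually compare to the printed rating.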
iot specific factor on capacity vs autonomy
The IoT area is really specific for batteries: we have an alternation of deep-sleep periods, where the consumption is tens of µA, followed by strong current peaks for communication. The peak is about 50mA in Europe and 120mA in North America for most of the LPWAN technologies.
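To get a first feel for the numbers, the average current of such a profile is just a time-weighted mean. A sketch with illustrative figures (20µA deep sleep, a 50mA peak lasting 2s, one message per hour — all assumptions); keep in mind this naive average hides the peak effects on the real battery:

```python
def average_current_ua(i_sleep_ua, i_peak_ma, t_peak_s, period_s):
    """Time-weighted average current (uA) of a sleep/transmit duty cycle."""
    t_sleep = period_s - t_peak_s
    return (i_sleep_ua * t_sleep + i_peak_ma * 1000 * t_peak_s) / period_s

avg = average_current_ua(20, 50, 2, 3600)
print(f"average: {avg:.1f} uA")  # ~47.8 uA
# naive autonomy from a (claimed) 1000 mAh cell, with no derating at all:
print(f"{1_000_000 / avg / 24 / 365:.1f} years")  # ~2.4 years
```

This kind of back-of-the-envelope number is a starting point only; the sections below explain why the real battery will deliver less.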
When reading a battery datasheet, you will get information based on constant current consumption. Extrapolating this information to the reality of your consumption is dangerous.
The best is to simulate your device behavior as much as possible. There are some simulation tools (but how close are they to reality?). You can also use (once again) a tool like OTII to create a consumption profile and run it on your battery. The problem is that you can't execute 10 years of measurements for real before launching your product. So I see two ways to do it:
- The first way is to create a consumption profile with real peaks followed by a short deep-sleep period, just long enough to let the battery rest a bit. This way you can discharge the battery in a couple of days. We can consider this a worst case.
- The other way is to create different discharge profiles based on different loads around the average load you are expecting. You run them for a couple of days / weeks. Then you apply your real device consumption and see which simulated profile matches it. This way you will never have a full discharge scenario, but you will see the impact of peaks on the battery capacity.
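The first, accelerated approach can be sketched as a synthetic profile: the real transmission peaks, separated by a rest short enough to drain the battery in days. A minimal sketch of building such a profile (the 50mA / 2s / 30s figures are illustrative; a tool like OTII would replay the resulting profile on the real battery):

```python
def accelerated_profile(i_peak_ma, t_peak_s, t_rest_s, cycles):
    """Build a 1-second-step current profile (mA): real transmission peaks,
    but only a short rest between them instead of hours of deep sleep."""
    profile = []
    for _ in range(cycles):
        profile += [i_peak_ma] * t_peak_s   # transmission burst
        profile += [0.02] * t_rest_s        # short rest (~20 uA sleep)
    return profile

profile = accelerated_profile(50, 2, 30, cycles=3)
charge_mah = sum(profile) / 3600            # 1 s samples -> mAh
print(f"{len(profile)} samples, {charge_mah:.3f} mAh per run")
```

Scaling up the cycle count lets you empty the battery in a few days while keeping the peak shape, which is what stresses the chemistry.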
As an example, on this picture you can see the difference between an equivalent power drawn continuously versus drawn with peaks.
This step is one of the most complex in your battery selection, and it is where crystal-ball prediction / experience comes into play. My recommendation is to never commit on autonomy until you have a real simulation or experience. If you have to, consider a large margin.
On top of this you need to consider the environmental conditions. They will have a large impact on your battery choice and its capacity.
low temperature impact
Low temperature, under 0°C, will have different impacts on your IoT device. As your device is deep-sleeping, the clock will take more time to wake up correctly. When exposed to such conditions, you can get a higher consumption from warmup code in your firmware running for a longer period. This is a side effect; the main impact of cold temperature is on the batteries. The battery's capacity to deliver current peaks can be reduced, and its voltage can drop (which increases the current peak). All of this can directly affect your radio transmission quality.
So, for devices exposed to low temperature, you really need to create a battery profile at the lowest temperature and verify what happens with current peaks in such conditions. You may identify a capacity impact and a voltage impact.
warm temperature impact
Warm temperature, higher than 40°C, will also have an impact on your device's batteries. The main impact is a risk of fire or explosion, depending on your battery technology. It is really important to respect the temperature limits and select your battery technology accordingly.
It is really important to consider that any battery exposed outside, subject to direct sun exposure, will reach more than 50°C. As most of the temperature limits are 60°C, you face risks ranging from capacity reduction to explosion.
I've written some posts about Li-Po batteries and temperature in the past; you can read them.
primary or secondary cells
Primary cells have the advantage of offering a higher capacity than secondary cells. They are also (usually) more stable in temperature and have a lower self-discharge in storage than secondary cells.
Basically, if you are looking for really long autonomy, primary cells are the better choice. They will offer the best performance and reduce the number of human interventions. The economic equation for using secondary cells will not be favorable.
If your device has a high current consumption and the battery life can't be longer than 6 months, the use of secondary cells will be interesting for your user. Recharging the device every 6 months will have a lower impact for them than managing a stock of batteries. This is particularly true when you have selected expensive batteries like lithium-based ones.
Secondary batteries will have a larger self-discharge when stored. This is also really related to the battery technology. You also need to consider this factor when you stock your devices for a long time before selling them. Whether the battery is left connected or disconnected will also have an impact on the storage discharge. This point needs to be specified and considered.
The way we can transport a device containing lithium batteries is also a key factor to consider. Lithium batteries are subject to many restrictions due to the risk of fire & explosion. This can be a cost factor when selling your product, and it can also be a usage limitation for your client.
As a consequence, you first need to list all the transportation situations your device will encounter during the different steps of its life, to ensure you select the right technology.