Android servicemanager and Binder source code analysis, part 3: how we enter the kernel to communicate

Picking up from the previous article: starting from getService, we now walk into Binder's actual communication mechanism.
First, the Java layer from last time, in /frameworks/base/core/java/android/os/ServiceManagerNative.java:

public IBinder getService(String name) throws RemoteException {
    Parcel data = Parcel.obtain();
    Parcel reply = Parcel.obtain();
    data.writeInterfaceToken(IServiceManager.descriptor);
    data.writeString(name);
    mRemote.transact(GET_SERVICE_TRANSACTION, data, reply, 0);
    IBinder binder = reply.readStrongBinder();
    reply.recycle();
    data.recycle();
    return binder;
}

1. Create two Parcel objects, data and reply: one carries the outbound request, the other receives the reply;
2. Write the name of the service we want into data;
3. The key step: call transact on mRemote;
4. Read the result out of reply;
5. Recycle both Parcels and return the IBinder that was read (the native side follows the same pattern; see the sketch right after this list).
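For comparison, the native proxy goes through the same steps. A hedged sketch, written as a standalone helper against an arbitrary remote binder rather than as the real BpServiceManager method (which, in the actual tree, additionally wraps the call in a retry loop):

#include <binder/IBinder.h>
#include <binder/IServiceManager.h>
#include <binder/Parcel.h>
using namespace android;

// Sketch only: the same in/out Parcel pattern as the Java code above.
sp<IBinder> getServiceOverBinder(const sp<IBinder>& remote, const String16& name)
{
    Parcel data, reply;                              // in/out Parcels (stack-allocated, no recycle needed)
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);                        // the service name
    remote->transact(IServiceManager::GET_SERVICE_TRANSACTION, data, &reply);
    return reply.readStrongBinder();                 // the service's IBinder
}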

So what exactly is mRemote? Look at the code:

/**
 * Cast a Binder object into a service manager interface, generating
 * a proxy if needed.
 */
static public IServiceManager asInterface(IBinder obj)
{
    if (obj == null) {
        return null;
    }
    IServiceManager in =
        (IServiceManager)obj.queryLocalInterface(descriptor);
    if (in != null) {
        return in;
    }

    return new ServiceManagerProxy(obj);
}
public ServiceManagerProxy(IBinder remote) {
    mRemote = remote;
}

/frameworks/base/core/java/android/os/ServiceManager.java:

private static IServiceManager getIServiceManager() {
    if (sServiceManager != null) {
        return sServiceManager;
    }

    // Find the service manager
    sServiceManager = ServiceManagerNative.asInterface(BinderInternal.getContextObject());
    return sServiceManager;
}

/frameworks/base/core/java/com/android/internal/os/BinderInternal.java:

/**
 * Return the global "context object" of the system.  This is usually
 * an implementation of IServiceManager, which you can use to find
 * other services.
 */
public static final native IBinder getContextObject();

As you can see, it's an IBinder object. The call obj.queryLocalInterface(descriptor) is an interface method declared on IBinder, implemented in /frameworks/base/core/java/android/os/Binder.java:

/**
 * Use information supplied to attachInterface() to return the
 * associated IInterface if it matches the requested
 * descriptor.
 */
public IInterface queryLocalInterface(String descriptor) {
    if (mDescriptor.equals(descriptor)) {
        return mOwner;
    }
    return null;
}
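Notice what this buys: if the caller lives in the same process as the service, queryLocalInterface returns the local object and no IPC ever happens; only a cross-process caller ends up with a proxy. The native layer has the identical shortcut. A sketch of the idea, modeled on what IMPLEMENT_META_INTERFACE expands to inside IServiceManager.cpp, where the BpServiceManager proxy class is defined (illustrative, not a paste from the tree):

#include <binder/IServiceManager.h>
using namespace android;

sp<IServiceManager> asInterfaceSketch(const sp<IBinder>& obj)
{
    sp<IServiceManager> intr;
    if (obj != NULL) {
        // Same process? Then this is the real object; use it directly.
        intr = static_cast<IServiceManager*>(
            obj->queryLocalInterface(IServiceManager::descriptor).get());
        if (intr == NULL) {
            // Different process: wrap the handle in a proxy
            // (BpServiceManager lives in IServiceManager.cpp).
            intr = new BpServiceManager(obj);
        }
    }
    return intr;
}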

A quick aside: this descriptor is a String-typed descriptor. It took a fair bit of digging, but it's defined in /frameworks/base/core/java/android/os/IServiceManager.java:

static final String descriptor = "android.os.IServiceManager";

It identifies the current interface as ServiceManager. The IBinder handed into ServiceManagerProxy's constructor is remote — exactly what BinderInternal.getContextObject() returns. Note its comment, 'Return the global "context object" of the system': it is the system's context object. Its native-layer implementation is here:
/frameworks/base/core/jni/android_util_Binder.cpp

static jobject android_os_BinderInternal_getContextObject(JNIEnv* env, jobject clazz)
{
    sp<IBinder> b = ProcessState::self()->getContextObject(NULL);
    return javaObjectForIBinder(env, b);
}

To pin down what this remote really is, we have to look at ProcessState, a C++ class. From its header definition:
/frameworks/native/include/binder/ProcessState.h

static sp<ProcessState> self();
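Before diving further, a minimal sketch (hypothetical client code, not part of the article's call path) of how this singleton is reached from any native process:

#include <binder/IBinder.h>
#include <binder/ProcessState.h>
using namespace android;

int main()
{
    // First call creates the per-process singleton; its constructor opens
    // /dev/binder and mmaps the receive buffer exactly once per process.
    sp<ProcessState> proc = ProcessState::self();

    // Handle 0 is hard-wired to the context manager, i.e. servicemanager.
    sp<IBinder> contextObject = proc->getContextObject(NULL);
    return contextObject != NULL ? 0 : 1;
}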

self() returns a singleton. Since it lives in the native layer, not down in the driver, you can think of it as one instance per process, as the sketch above illustrates. Now let's see what its getContextObject actually does:

sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
    return getStrongProxyForHandle(0);
}
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one.  See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            if (handle == 0) {
                // Special case for context manager...
                // The context manager is the only object for which we create
                // a BpBinder proxy without already holding a reference.
                // Perform a dummy transaction to ensure the context manager
                // is registered before we create the first local reference
                // to it (which will occur when creating the BpBinder).
                // If a local reference is created for the BpBinder when the
                // context manager is not present, the driver will fail to
                // provide a reference to the context manager, but the
                // driver API does not return status.
                //
                // Note that this is not race-free if the context manager
                // dies while this code runs.
                //
                // TODO: add a driver API to wait for context manager, or
                // stop special casing handle 0 for context manager and add
                // a driver API to get a handle to the context manager with
                // proper reference counting.

                Parcel data;
                status_t status = IPCThreadState::self()->transact(
                        0, IBinder::PING_TRANSACTION, data, NULL, 0);
                if (status == DEAD_OBJECT)
                   return NULL;
            }

            b = new BpBinder(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}

Honestly, at this point my head is starting to spin. Reading OS source code, one thing constantly drags in a pile of other concepts and objects, and if you try to skip past them and just skim the outline, sometimes it simply doesn't work — to understand it you have to dig in. So let's keep going.
The line lookupHandleLocked(handle) looks up the handle_entry for this handle; that entry is what stores the binder. A handle here is a lot like its Windows namesake: merely an index used to find the actual entity.

ProcessState::handle_entry* ProcessState::lookupHandleLocked(int32_t handle)
{
    const size_t N=mHandleToObject.size();
    if (N <= (size_t)handle) {
        handle_entry e;
        e.binder = NULL;
        e.refs = NULL;
        status_t err = mHandleToObject.insertAt(e, N, handle+1-N);
        if (err < NO_ERROR) return NULL;
    }
    return &mHandleToObject.editItemAt(handle);
}

Vector<handle_entry> mHandleToObject; is defined in the header: a vector that maintains the set of handle entries (growing on demand, as the insertAt above shows), and what those entries hold is the binders. All of this lives inside ProcessState, and ProcessState is a singleton, one per process. So read it this way: every Android process keeps an array of all the binders that this process uses.

Back to getStrongProxyForHandle. If handle is 0 (and what actually gets passed in here is 0, which designates servicemanager), we are asking for servicemanager itself, the most fundamental service, and a special step runs first: IPCThreadState::self()->transact(0, IBinder::PING_TRANSACTION, data, NULL, 0). Read literally, this pings the binder kernel device — anyone who knows networking will recognize the idea. If the status comes back DEAD_OBJECT, NULL is returned immediately. All of this happens when the looked-up binder is null; continuing down that path, a BpBinder is created and stored into the handle_entry — that is, kept in the vector we just saw, so the next lookup returns it directly. What finally gets returned is this BpBinder, and combined with the Java-layer analysis above, this BpBinder is mRemote. Which raises the next question: what is BpBinder? Keep reading, in /frameworks/native/include/binder/BpBinder.h:

class BpBinder : public IBinder

This one line shows that it derives from IBinder. IBinder itself (/frameworks/native/include/binder/IBinder.h) is just an abstract class that standardizes the interface. Two lines in it stand out:

virtual BBinder*  localBinder();
virtual BpBinder* remoteBinder();

Now back in BpBinder:

virtual BpBinder* remoteBinder();
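Putting the two headers side by side, the shape of the design is roughly this (a simplified sketch of the pattern; the real headers also involve RefBase and many more methods):

class BBinder;
class BpBinder;

// IBinder declares both hooks; the defaults return NULL.
class IBinder {
public:
    virtual ~IBinder() {}
    virtual BBinder*  localBinder()  { return NULL; }  // non-NULL => local/server object
    virtual BpBinder* remoteBinder() { return NULL; }  // non-NULL => remote proxy
};

// The server-side object identifies itself via localBinder().
class BBinder : public IBinder {
public:
    virtual BBinder* localBinder() { return this; }
};

// The client-side proxy identifies itself via remoteBinder().
class BpBinder : public IBinder {
public:
    virtual BpBinder* remoteBinder() { return this; }
};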

Seeing this, here's a guess: IBinder standardizes the interface for every binder, but binders come in two kinds, server and client, told apart by localBinder versus remoteBinder; whichever of the two a concrete IBinder subclass implements is, in effect, what that subclass is. Let's park this for now and rewind to the Java layer. Remember ServiceManager's getService, where all of this started: it returns an IBinder, which we now know is the BpBinder. When the cache misses, we land in ServiceManagerNative.asInterface, which news up a ServiceManagerProxy. Let's look again at that object's getService; this is what actually runs:

public IBinder getService(String name) throws RemoteException {
    Parcel data = Parcel.obtain();
    Parcel reply = Parcel.obtain();
    data.writeInterfaceToken(IServiceManager.descriptor);
    data.writeString(name);
    mRemote.transact(GET_SERVICE_TRANSACTION, data, reply, 0);
    IBinder binder = reply.readStrongBinder();
    reply.recycle();
    data.recycle();
    return binder;
}

mRemote.transact is the crux. Since we now know mRemote is the BpBinder, let's follow it down into /frameworks/native/libs/binder/BpBinder.cpp:

status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}

The key call is IPCThreadState::self()->transact. IPCThreadState looks like a per-thread object: self() news up the instance, and then the constructor runs:

IPCThreadState::IPCThreadState()
    : mProcess(ProcessState::self()),
      mMyThreadId(gettid()),
      mStrictModePolicy(0),
      mLastTransactionBinderFlags(0)
{
    pthread_setspecific(gTLS, this);
    clearCaller();
    mIn.setDataCapacity(256);
    mOut.setDataCapacity(256);
}
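self() itself isn't pasted in this article, but the lazy per-thread creation it performs is essentially the standard pthread TLS pattern. A simplified sketch, assuming the gTLS key has already been created (the real code also guards key creation, restarts, and shutdown):

#include <pthread.h>

static pthread_key_t gTLS;  // created once elsewhere via pthread_key_create()

IPCThreadState* IPCThreadState::self()
{
    IPCThreadState* st = (IPCThreadState*)pthread_getspecific(gTLS);
    if (st) return st;             // this thread already has its instance
    // First use on this thread: the constructor above calls
    // pthread_setspecific(gTLS, this), so the next call hits the fast path.
    return new IPCThreadState();
}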

It keeps a reference to the process object and records the thread id it lives on. So here's a guess: this object is tied to the current thread, so that every thread can reach binder. And as the sketch above suggests, self() also does the TLS bookkeeping. So what BpBinder::transact invokes is IPCThreadState::transact, in /frameworks/native/libs/binder/IPCThreadState.cpp:

status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();

    flags |= TF_ACCEPT_FDS;

    IF_LOG_TRANSACTIONS() {
        TextOutput::Bundle _b(alog);
        alog << "BC_TRANSACTION thr " << (void*)pthread_self() << " / hand "
            << handle << " / code " << TypeCode(code) << ": "
            << indent << data << dedent << endl;
    }

    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }

    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }

    if ((flags & TF_ONE_WAY) == 0) {
        #if 0
        if (code == 4) { // relayout
            ALOGI(">>>>>> CALLING transaction 4");
        } else {
            ALOGI(">>>>>> CALLING transaction %d", code);
        }
        #endif
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
        #if 0
        if (code == 4) { // relayout
            ALOGI("<<<<<< RETURNING transaction 4");
        } else {
            ALOGI("<<<<<< RETURNING transaction %d", code);
        }
        #endif

        IF_LOG_TRANSACTIONS() { ... }  // reply logging elided
    } else {
        err = waitForResponse(NULL, NULL);
    }

    return err;
}

The writing itself happens in writeTransactionData:

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.ptr = 0;
    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;

    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
        tr.offsets_size = 0;
        tr.data.ptr.offsets = 0;
    } else {
        return (mLastError = err);
    }

    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));

    return NO_ERROR;
}
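One detail worth pausing on: the Parcel payload is not copied into mOut; tr only carries its sizes and pointers (tr.data.ptr.buffer / tr.data.ptr.offsets). So every record appended to mOut has a fixed size, whatever the payload. A throwaway check to make the point (hypothetical snippet; the UAPI include path varies across kernels, and older trees ship a local copy of the header):

#include <cstdint>
#include <cstdio>
#include <linux/android/binder.h>  // binder UAPI header; path may differ

int main()
{
    // The record layout is [uint32_t cmd][binder_transaction_data tr].
    printf("record = %zu bytes (4-byte cmd + %zu-byte tr), regardless of payload\n",
           sizeof(uint32_t) + sizeof(binder_transaction_data),
           sizeof(binder_transaction_data));
    return 0;
}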

The key is the last two lines: mOut.writeInt32(cmd); and mOut.write(&tr, sizeof(tr));. What do they do? They write the command first, then the binder transaction data. And then? Surprisingly, no further processing at all — the record just sits in mOut. Fine, onward: further down, transact calls waitForResponse. Let's take a look:

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;

        cmd = (uint32_t)mIn.readInt32();

        IF_LOG_COMMANDS() { ... }  // logging elided

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;

        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;

        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;

        case BR_ACQUIRE_RESULT:
            {
                ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                const int32_t result = mIn.readInt32();
                if (!acquireResult) continue;
                *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
            }
            goto finish;

        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t),
                            freeBuffer, this);
                    } else {
                        err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(binder_size_t), this);
                    continue;
                }
            }
            goto finish;

        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }

    return err;
}

It opens with an infinite loop, inside which sits a call to talkWithDriver, after which the code goes straight to reading the contents of mIn. So my read is that this call is the crux: it hands the data we just wrote over to the kernel driver and waits for the driver to finish processing it. Is that right? Look at the code, in /frameworks/native/libs/binder/IPCThreadState.cpp:

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }

    binder_write_read bwr;

    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();

    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    IF_LOG_COMMANDS() { ... }  // logging elided

    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        IF_LOG_COMMANDS() { ... }  // logging elided
#if defined(HAVE_ANDROID_OS)
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
        IF_LOG_COMMANDS() { ... }  // logging elided
    } while (err == -EINTR);

    IF_LOG_COMMANDS() { ... }  // logging elided

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        IF_LOG_COMMANDS() { ... }  // logging elided
        return NO_ERROR;
    }

    return err;
}

Seeing this, I have to say: damn, we finally made it. It's been a grind — n layers of calls and all kinds of logic deep, we can at last see the part that talks to the driver, and the key line is ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr). Next we really should step into the kernel driver itself — but let's stop for a moment and take stock.
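For reference, the structure handed to that ioctl is declared in the kernel's binder UAPI header essentially as below; the field comments are mine, mapping each field back to talkWithDriver above:

struct binder_write_read {
    binder_size_t    write_size;      // bytes available in write_buffer (mOut)
    binder_size_t    write_consumed;  // bytes the driver actually consumed
    binder_uintptr_t write_buffer;    // -> mOut.data()
    binder_size_t    read_size;       // capacity of read_buffer (mIn)
    binder_size_t    read_consumed;   // bytes the driver filled in
    binder_uintptr_t read_buffer;     // -> mIn.data()
};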

This article explained what remote is and how it relates to binder, and introduced the concept of BpBinder (the client-side stand-in for a server's binder), though without analyzing it closely; we'll cover its relationship with BnBinder at a suitable point later. It then showed how the per-thread IPCThreadState relates to binder: to let multiple threads talk to other processes' binders, this object has to encapsulate the actual communication with the binder kernel driver — and note that it is transact that really ships the data. With that, we have finally arrived at the kernel driver.

There's still a lot this leaves out, but for now I'd rather hold on to the main thread: what we're analyzing is servicemanager and how it uses binder to exchange data with other processes. There are many threads one could pull; let's follow getService to the end first. Once this line is clear, the rest will read much more easily.
We'll continue in the next article.


