fsword's blog


The Missing Link in Continuous Integration (1)


What we usually call continuous integration means using thorough automation to let software flow naturally through requirements, design, coding, and testing, forming a continuous, iterative process. The key word here is "continuous", and the basic technical means is "automation".

That's easier said than done, though. So far many companies and teams have made some degree of effort, and so have we; but in reviewing our own work we noticed that one link seems to be missing: joint debugging, or in other words system integration.

Viewed from a certain angle, software development is a matter of splitting big features into small tasks and then accumulating the pieces bottom-up, as in the figure:

[Figure: software_layer]

Our testing actually follows the same premise. As everyone knows, the lower-level a software unit is, the easier it is to test exhaustively, at the cost of simulating its boundaries, perhaps with a mock, perhaps with a simulator. Higher-level tests avoid that simulation cost at the boundaries, at the price of not being able to cover every possible logic branch.

The usual practice is to combine the two: we distinguish several levels, namely unit tests, module tests, and integration tests. In unit tests we use mocking to cover as many logic branches as possible; in integration tests we care more about critical paths and the user's perspective.

But in software systems like ours, applications rarely work in isolation; most systems cooperate through some kind of distributed architecture. The common situation looks like this:

[Figure: the old way]

A structure like this challenges our quality work, because we have no infrastructure for testing the cooperation between systems (at least I haven't found one), which forces us to try a road we haven't traveled before.

As mentioned above, the basic idea is still automation, except that now we must handle automated deployment and testing of several systems at once. The testing in this scenario is closer to acceptance testing and relatively uncomplicated, so the crux is deployment. The picture we want to realize is this:

[Figure: the new way]

Erlang Environment Setup FAQ


How to add wx support

Many of Erlang's GUI tools, such as reltool, are built on the wx library, but the default erlang package on Ubuntu ships without wx support. The typical error looks like this:

1> reltool:start().

=ERROR REPORT==== 9-Dec-2012::15:28:51 ===
ERROR: Could not find 'wxe_driver.so' in: /home/john/software/otp/lib/erlang/lib/wx-0.99.1/priv
** exception exit: {load_driver,"No driver found"}
     in function  wxe_server:start/0 (wxe_server.erl, line 64)
         in call from wx:new/1 (wx.erl, line 99)
         in call from reltool_sys_win:do_init/1 (reltool_sys_win.erl, line 140)
         in call from reltool_sys_win:init/1 (reltool_sys_win.erl, line 130)
         in call from proc_lib:init_p_do_apply/3 (proc_lib.erl, line 227)

Experienced users will usually try building Erlang themselves, but may then find that the build cannot locate the wx library. In fact, Ubuntu environments usually do have the libwxgtk2.8-dev package installed; what's missing is a symlink (see the official instructions):

cd /usr/include
sudo ln -sv wx-2.8/wx wx

One more tip: you can verify wx support at the configure stage by checking for the following output:

checking for debug build of wxWidgets... checking for wx-config... /usr/bin/wx-config
checking for wxWidgets version >= 2.8.4 (--unicode --debug=yes)... no
checking for standard build of wxWidgets... checking for wx-config... (cached) /usr/bin/wx-config
checking for wxWidgets version >= 2.8.4 (--unicode --debug=no)... yes (version 2.8.12)

Using Modules


We use modules all the time, typically like this:

module X
  def hello; 'hello'; end
  def world; 'world'; end
end

Using it is simple:

1.9.3p327 :006 >   include X
=> Object
1.9.3p327 :007 > hello
=> "hello"

Sometimes, though, I'd rather not have to include the module before using it, and then I hit an error:

1.9.3p327 :006 > X.hello
NoMethodError: undefined method `hello' for X:Module
    from (irb):6
    from /home/john/.rvm/rubies/ruby-1.9.3-p327-falcon/bin/irb:16:in `<main>'

That's because methods defined with def are instance methods of the module. Of course, just as class methods can be defined inside a class, a module can do the same:

module X
  def self.hello; 'hello'; end
  def self.world; 'world'; end
end

# in irb
1.9.3p327 :007 > X.hello
=> "hello"

For a module with many methods, we can skip writing all the self. prefixes by leaning on the extend mechanism:

module X
  extend self
  def hello; 'hello'; end
  def world; 'world'; end
end

# in irb
1.9.3p327 :007 >   X.hello
 => "hello"
1.9.3p327 :008 > include X
 => Object
1.9.3p327 :009 > hello
 => "hello"

What this extend does is extend the module with itself, so its instance methods double as module-level methods. But what if we don't want these methods to be picked up via include? Reading the I18n source today, I learned a trick:

module X
  extend Module.new{
    def hello; 'hello'; end
    def world; 'world'; end
  }
end

Because what gets extended is not the current module but an anonymous one, a "territory" no other code can reach, its methods cannot be brought in via include, which is exactly what we wanted:

1.9.3p327 :008 > X.hello
 => "hello"
1.9.3p327 :009 > include X
 => Object
1.9.3p327 :010 > hello
NameError: undefined local variable or method `hello' for main:Object
    from (irb):10
    from /home/john/.rvm/rubies/ruby-1.9.3-p327-falcon/bin/irb:16:in `<main>'

Used well, modules greatly improve code reuse while preserving the encapsulation you need. Have fun!

Writing Method-Chaining Code


One of the most valuable things about Ruby is the variety of DSLs it has inspired, for example:

tags.delete_if{|x| x.nil?}.map{|v| v.sub /^!/,''}.delete_if{|x| x.empty?}

This style is usually called method chaining. Its strength is that the code reads continuously, without redundant local variables and assignments, matching the way people think.

But there is a hidden cost: readability can suffer. In this example the array transformation involves three steps, and a careless reader might miss one.

From my own experience, I generally rewrite it like this:

tags.
     delete_if(&:nil?).
     map{|v| v.sub /^!/,''}.
     delete_if(&:empty?)

Now each line is one self-contained piece of logic, so readability is assured. The story doesn't end there, though: another common problem with chained calls is failure handling; users sometimes can't tell where in the chain things went wrong.

Since APIs like this usually rely on lazy evaluation (a big topic of its own; please google it), the earlier links merely collect the logic the computation needs. Failures always surface at the end, where the error can report the complete computation context, so in most cases troubleshooting isn't too hard.

But what do we do when an API is badly designed, or something unexpected happens?

The most common troubleshooting tool is logging, and logging is precisely what's awkward in a chain. Fortunately Ruby's standard library offers a handy API, tap (many people find it baffling at first, because it appears to do nothing). Consider the following example:

tags                       .tap{|x|logger.info "origin size: #{x.size}"    }
    .delete_if(&:nil?)     .tap{|x|logger.info "del nil    : #{x.size}"    }
    .map{|v| v.sub /^!/,''}.tap{|x|logger.info "sub        : #{x.inspect}" }
    .delete_if(&:empty?)   .tap{|x|logger.info "del empty  : #{x.size}"    }
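
In case tap still seems mysterious: its behavior can be approximated in a couple of lines (an illustrative sketch, not the stdlib source; my_tap is a made-up name):

class Object
  def my_tap            # behaves like the built-in tap
    yield self          # hand the receiver to the block...
    self                # ...then return the receiver unchanged
  end
end

[1, 2, 3].my_tap { |x| puts "size: #{x.size}" }.map { |v| v * 2 }
# prints "size: 3" and returns [2, 4, 6]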

Code like this expresses the business logic fluently while emitting logs when needed, which is basically good enough. Incidentally, I aligned the code deliberately, and not just for looks: if one day you decide this code no longer needs logging, the column-editing mode available in many editors will let you strip all the logging in one pass.

OAuth 2.0 and the Road to Hell


They say the road to hell is paved with good intentions. Well, that’s OAuth 2.0.

Last month I reached the painful conclusion that I can no longer be associated with the OAuth 2.0 standard. I resigned my role as lead author and editor, withdraw my name from the specification, and left the working group. Removing my name from a document I have painstakingly labored over for three years and over two dozen drafts was not easy. Deciding to move on from an effort I have led for over five years was agonizing.

There wasn’t a single problem or incident I can point to in order to explain such an extreme move. This is a case of death by a thousand cuts, and as the work was winding down, I’ve found myself reflecting more and more on what we actually accomplished. At the end, I reached the conclusion that OAuth 2.0 is a bad protocol. WS-* bad. It is bad enough that I no longer want to be associated with it. It is the biggest professional disappointment of my career.

All the hard fought compromises on the mailing list, in meetings, in special design committees, and in back channels resulted in a specification that fails to deliver its two main goals – security and interoperability. In fact, one of the compromises was to rename it from a protocol to a framework, and another to add a disclaimer that warns that the specification is unlikely to produce interoperable implementations.

When compared with OAuth 1.0, the 2.0 specification is more complex, less interoperable, less useful, more incomplete, and most importantly, less secure.

To be clear, OAuth 2.0 at the hand of a developer with deep understanding of web security will likely result in a secure implementation. However, at the hands of most developers – as has been the experience from the past two years – 2.0 is likely to produce insecure implementations.

How did we get here?

At the core of the problem is the strong and unbridgeable conflict between the web and the enterprise worlds. The OAuth working group at the IETF started with strong web presence. But as the work dragged on (and on) past its first year, those web folks left along with every member of the original 1.0 community. The group that was left was largely all enterprise… and me.

The web community was looking for a protocol very much in-line with 1.0, with small improvements in areas that proved lacking: simplifying signature, adding a light identity layer, addressing native applications, adding more flows to accommodate new client types, and improving security. The enterprise community was looking for a framework they can use with minimal changes to their existing systems, and for some, a new source of revenues through customization. To understand the depth of the divide – in an early meeting the web folks wanted a flow optimized for in-browser clients while the enterprise folks wanted a flow using SAML assertions.

The resulting specification is a designed-by-committee patchwork of compromises that serves mostly the enterprise. To be accurate, it doesn’t actually give the enterprise all of what they asked for directly, but it does provide for practically unlimited extensibility. It is this extensibility and required flexibility that destroyed the protocol. With very little effort, pretty much anything can be called OAuth 2.0 compliant.

Under the Hood

To understand the issues in 2.0, you need to understand the core architectural changes from 1.0:

  • Unbounded tokens - In 1.0, the client has to present two sets of credentials on each protected resource request, the token credentials and the client credentials. In 2.0, the client credentials are no longer used. This means that tokens are no longer bound to any particular client type or instance. This has introduced limits on the usefulness of access tokens as a form of authentication and increased the likelihood of security issues.
  • Bearer tokens - 2.0 got rid of all signatures and cryptography at the protocol level. Instead it relies solely on TLS. This means that 2.0 tokens are inherently less secure as specified. Any improvement in token security requires additional specifications and, as the current proposals demonstrate, the group is solely focused on enterprise use cases.
  • Expiring tokens - 2.0 tokens can expire and must be refreshed. This is the most significant change for client developers from 1.0 as they now need to implement token state management. The reason for token expiration is to accommodate self-encoded tokens – encrypted tokens which can be authenticated by the server without a database look-up. Because such tokens are self-encoded, they cannot be revoked and therefore must be short-lived to reduce their exposure. Whatever is gained from the removal of the signature is lost twice in the introduction of the token state management requirement.
  • Grant types - In 2.0, authorization grants are exchanged for access tokens. A grant is an abstract concept representing the end-user approval. It can be a code received after the user clicks ‘Approve’ on an access request, or the user’s actual username and password. The original idea behind grants was to enable multiple flows. 1.0 provides a single flow which aims to accommodate multiple client types. 2.0 adds a significant amount of specialization for different client types.

Indecision Making

These changes are all manageable if put together in a well-defined protocol. But as has been the nature of this working group, no issue is too small to get stuck on or leave open for each implementation to decide. Here is a very short sample of the working group’s inability to agree:

  • No required token type
  • No agreement on the goals of an HMAC-enabled token type
  • No requirement to implement token expiration
  • No guidance on token string size, or any value for that matter
  • No strict requirement for registration
  • Loose client type definition
  • Lack of clear client security properties
  • No required grant types
  • No guidance on the suitability or applicability of grant types
  • No useful support for native applications (but lots of lip service)
  • No required client authentication method
  • No limits on extensions

On the other hand, 2.0 defines 4 new registries for extensions, along with additional extension points via URIs. The result is a flood of proposed extensions. But the real issue is that the working group could not define the real security properties of the protocol. This is clearly reflected in the security considerations section, which is largely an exercise in hand waving. It is barely useful to security experts as a bullet point of things to pay attention to.

In fact, the working group has also produced a 70-page document describing the 2.0 threat model which does attempt to provide additional information but suffers from the same fundamental problem: there isn’t an actual protocol to analyze.

Reality

In the real world, Facebook is still running on draft 12 from a year and a half ago, with absolutely no reason to update their implementation. After all, an updated 2.0 client written to work with Facebook’s implementation is unlikely to be useful with any other provider and vice-versa. OAuth 2.0 offers little to no code re-usability.

What 2.0 offers is a blueprint for an authorization protocol. As defined, it is largely useless and must be profiled into a working solution – and that is the enterprise way. The WS-* way. 2.0 provides a whole new frontier to sell consulting services and integration solutions.

The web does not need yet another security framework. It needs simple, well-defined, and narrowly suited protocols that will lead to improved security and increased interoperability. OAuth 2.0 fails to accomplish anything meaningful over the protocol it seeks to replace.

To Upgrade or Not to Upgrade

Over the past few months, many asked me if they should upgrade to 2.0 or which version of the protocol I recommend they implement. I don’t have a simple answer.

If you are currently using 1.0 successfully, ignore 2.0. It offers no real value over 1.0 (I’m guessing your client developers have already figured out 1.0 signatures by now).

If you are new to this space, and consider yourself a security expert, use 2.0 after careful examination of its features. If you are not an expert, either use 1.0 or copy the 2.0 implementation of a provider you trust to get it right (Facebook’s API documents are a good place to start). 2.0 is better for large scale, but if you are running a major operation, you probably have some security experts on site to figure it all out for you.

Now What?

I’m hoping someone will take 2.0 and produce a 10 page profile that’s useful for the vast majority of web providers, ignoring the enterprise. A 2.1 that’s really 1.5. But that’s not going to happen at the IETF. That community is all about enterprise use cases and if you look at their other efforts like OpenID Connect (which too was a super simple proposal turned into almost a dozen complex specifications), they are not capable of simple.

I think the OAuth brand is in decline. This framework will live for a while, and given the lack of alternatives, it will gain widespread adoption. But we are also likely to see major security failures in the next couple of years and the slow but steady devaluation of the brand. It will be another hated protocol you are stuck with.

At the same time, I am expecting multiple new communities to come up with something else that is more in the spirit of 1.0 than 2.0, and where one use case is covered extremely well. OAuth 1.0 was all about small web startups looking to solve a well-defined problem they needed to solve fast. I honestly don’t know what use cases OAuth 2.0 is trying to solve any more.

Final Note

This is a sad conclusion to a once promising community. OAuth was the poster child of small, quick, and useful standards, produced outside standards bodies without all the process and legal overhead.

Our standards making process is broken beyond repair. This outcome is the direct result of the nature of the IETF, and the particular personalities overseeing this work. To be clear, these are not bad or incompetent individuals. On the contrary – they are all very capable, bright, and otherwise pleasant. But most of them show up to serve their corporate overlords, and it’s practically impossible for the rest of us to compete.

Bringing OAuth to the IETF was a huge mistake. Not that the alternative (WRAP) would have been a better outcome, but at least it would have taken three less years to figure that out. I stuck around as long as I could stand it, to fight for what I thought was best for the web. I had nothing personally to gain from the decisions being made. At the end, one voice in opposition can slow things down, but can’t make a difference.

I failed.

We failed.

self in Ruby


Someone on ruby-china.org asked what self means, so here's a post to explain it.
Ruby's class and def keywords really work by switching the current context, and self is the most important element of the context being switched. Under Ruby's rules, the meaning of self changes whenever one of these keywords is encountered:

  • Inside a class body, self refers to the class itself

$ cat a.rb
class A
  puts self
end
$ ruby a.rb
A
  • After entering a method via def, self (written inside the method body) refers to the current receiver of the method call
$ cat a.rb
class A
  def x
    p self
  end
end
A.new.x
$ ruby a.rb
#<A:0x00000002705fb0>

As shown above, this time the self printed is an instance of class A rather than class A itself.

These two principles are very important. With them in hand, let's see what self in front of a method name means:

$ cat a.rb
class A
  def self.x
    p self
  end
end
A.x
A.new.x
$ ruby a.rb
A
a.rb:7:in `<main>': undefined method `x' for #<A:0x00000001a83f58> (NoMethodError)

Clearly this method looks like what we usually call a "class method" rather than an instance method (in this example the instance method x does not exist, so an exception is raised). The usual way to read it:

The self before the method name sits inside the class body, so it refers to class A. The receiver of method x is therefore class A itself (not an instance of it); by the earlier principle, inside def, self refers to the method's receiver, which here is class A itself.

One closing note:

  • Classes in Ruby are themselves objects, so "class method" is a somewhat imprecise term: every method belongs to some object. The self.x here is really a singleton method of class A (once more: of A itself, not of A's instances). To go further you'll need the concept of the eigenclass; a small taste follows.
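
As a first step in that direction, here is a minimal sketch (my own example, using standard Ruby syntax): opening the eigenclass with class << self defines the same kind of singleton method as def self.x did above:

class A
  class << self   # enter A's eigenclass
    def x
      p self      # the receiver is still A itself
    end
  end
end

A.x   # prints A, just like the def self.x version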

Finding Primes (2)


In the previous post we presented the basic sieve code, but for larger inputs the running time is unacceptable. We also observed that compute resources were far from saturated: CPU capacity was being wasted. The straightforward improvement is to compute in parallel, using multiple threads to exploit parallelism on one machine; given a network, the work could later be distributed across machine nodes as well.
To parallelize, we first have to analyze which parts of the algorithm can run concurrently and which must stay sequential. Looking back at the sieve, the work consists of two steps: picking a divisor and filtering the sequence with it. Divisors must be picked in increasing order, so that part is sequential; filtering with a given divisor is independent element by element, so it can be parallelized.
Let's transform the original algorithm accordingly. The rough idea:

1. Generate the sequence to be filtered
2. Split the sequence evenly into as many sub-sequences as there are parallel units
3. Loop over divisors from small to large; for each divisor
    3.1. Filter all the sub-sequences in parallel
    3.2. Stop the loop once the divisor passes the square root of N
4. Concatenate the sub-sequences and output the result

There is a catch. We split the sequence so that filtering can use all the parallel capacity, but after a number of rounds the sub-sequences get exhausted one by one, leaving parallel units (threads, machine nodes, and so on) progressively idle. To avoid this, the sequence should be re-split whenever a sub-sequence runs out.
The final code should look something like this:
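
(The code embedded in the original post did not survive extraction; the following is a minimal threaded sketch of the algorithm above, with names and details assumed. Note that on MRI the global interpreter lock limits real CPU parallelism for pure-Ruby work, so treat the thread layout as illustrating the structure rather than guaranteeing the speedup.)

def prime(n, workers = 4)
  numbers = (2..n).to_a
  primes  = []
  while !numbers.empty? && numbers.first**2 <= n
    divisor = numbers.shift               # the smallest survivor is the next prime
    primes << divisor
    next if numbers.empty?
    # re-split the remaining sequence on every round so no worker goes idle
    slice_size = (numbers.size.to_f / workers).ceil
    threads = numbers.each_slice(slice_size).map do |slice|
      Thread.new { slice.reject { |x| (x % divisor).zero? } }
    end
    numbers = threads.flat_map(&:value)   # join and keep the original order
  end
  primes + numbers                        # everything left over is prime
end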

The results:

$ ruby prime_benchmark.rb 
    user     system      total        real
    10000   0.050000    0.000000   0.050000 (  0.048889)
    100000  0.650000    0.000000   0.650000 (  0.649779)
    1000000 39.810000   0.050000  39.860000 ( 39.961506)

On my 4-core machine the improvement is substantial.

Note: the earlier code had a leftover problem: numbers that have already been sieved out need not be tried as divisors at all. Because this version re-splits the sequence, it gets the chance to cut that waste, which is also why the speedup is so pronounced.

Finding Primes (1)


Learning a language can be rather dry, but we can liven it up by doing something interesting with it, getting familiar with the language while solving concrete problems.
For example, let's practice on this one:

Find all primes less than N

First, a refresher on what we learned at school:

  • A prime (or prime number) is a natural number greater than 1 with a special property: apart from 1 and itself, it is not divisible by any other natural number.
For example:
    4 is not prime, because it is divisible by 2;
    11 is prime, because apart from 1 and itself it is divisible by no other natural number;
  • The most direct way to test whether a number is prime is to check whether any natural number from 2 up to (but excluding) the number divides it; better still, the largest divisor you ever need to try is its square root.

From this knowledge we can work out an approach:

1. List all the integers up to N
2. Mark the first prime in the sequence (e.g. 2), then delete every subsequent member divisible by it
3. Repeat this step on the resulting sequence, stopping once the divisor exceeds the square root of N

This is in fact the oldest (and possibly still the most efficient) solution to the problem, the Sieve of Eratosthenes, invented by the ancient Greek mathematician Eratosthenes.

With all that, we can write a Ruby implementation:
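
(The embedded snippet was lost in extraction; below is a minimal reconstruction consistent with the three steps above and with the irb session that follows. Treat it as a sketch, not the original code.)

def prime(n)
  numbers = (2..n).to_a      # step 1: list the integers up to N
  primes  = []
  while !numbers.empty? && numbers.first**2 <= n
    divisor = numbers.shift  # step 2: the first survivor is prime
    primes << divisor
    numbers.reject! { |x| (x % divisor).zero? }  # delete its multiples
  end
  primes + numbers           # step 3: past sqrt(N), all survivors are prime
end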

Let's run it:

1.9.3p194 :008 > prime 100
 => [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97]

Success!
Simple, isn't it? But wait: how long did that take? And how long would it take for all primes below a few million?
Let's write a bit of code to find out (we don't print the results after computing, to avoid stressing IO with a large dump):
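
(Again, the original prime_benchmark.rb was not preserved; this is a plausible sketch that assumes the prime method defined above.)

require 'benchmark'

# Time the sieve at three sizes; the results are computed but never printed.
Benchmark.bm(10) do |bm|
  [10_000, 100_000, 1_000_000].each do |n|
    bm.report(n.to_s) { prime(n) }
  end
end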

The results (slightly reformatted):

$ ruby prime_benchmark.rb
       user      system       total        real
       10000     0.020000     0.000000   0.020000   (  0.024682)
       100000    1.410000     0.010000   1.420000   (  1.419806)
       1000000   245.320000   0.060000   245.380000 (246.231597)

From 100,000 to 1,000,000 the running time grew roughly 200-fold!!! Checking load and CPU usage shows that load is low, but one CPU core is pinned at 100%: the machine's compute capacity is not being used evenly. We'll address that in the next installment.

On Class and Module


Someone on Ruby China asked a question:

class Test < Module; end
test = Test.new

So what on earth is test?

Answering this first requires a look at Ruby's object/class hierarchy, sketched below (<= means "is an instance of", < means "inherits from"):

user_instance <= UserClass < Class < Module < Object < BasicObject
                             |  |      |         |
                             |   <=====          |
                              <==================

Clearly, Class is a subclass of Module, so classes created by users are instances of Class, and hence also instances of Module:

1.9.3p194 :001 > class A; end
 => nil
1.9.3p194 :002 > A.is_a? Class
 => true
1.9.3p194 :003 > A.is_a? Module
 => true

But if a class inherits from Class or Module, then it is a subclass of Class/Module, and it is its instances that are instances of Class/Module:

1.9.3p194 :001 > class A; end
 => nil
1.9.3p194 :002 > class X < Module; end
 => nil
1.9.3p194 :003 > A.new.is_a? Module
 => false
1.9.3p194 :004 > X.new.is_a? Module
 => true

Here is the subtle part: X itself inherits from Module, so it is a subclass of Module; but it was also created with the class keyword, so it is an instance of Class. In other words, both X and X.new are instances of Module:

1.9.3p194 :005 > X.is_a? Module
 => true

Naturally one might infer that a user-defined class inheriting from Class would show the same behavior, but the syntax already rules this out: Ruby forbids subclassing Class directly, yet has no objection to subclassing Class's own parent, Module. That asymmetry is what makes this question confusing, as the quick check below shows.
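
A quick irb check confirms the restriction (the exact error wording may differ between Ruby versions):

1.9.3p194 :001 > class Y < Class; end
TypeError: can't make subclass of Class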

Notes on Metaprogramming Ruby


This post records some of the things I learned from the book Metaprogramming Ruby.

The object model

The class keyword

This keyword acts more like a scope operator than a type declaration; its core job is to bring the code into the class's context. Open classes are best understood from this angle, as the example below shows.
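
A tiny illustration of that reading (my own example, not from the book): the class String below doesn't redefine String, it merely re-enters String's context and adds to it:

class String      # re-open the existing core class
  def shout
    upcase + '!'
  end
end

'hello'.shout  # => "HELLO!"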

How module inclusion is implemented

Take the following code as an example:

module X;end

class   A
    include X
end

class  B < A; end

Under the hood, Ruby generates an anonymous class that wraps module X; on the final ancestor chain this anonymous class sits just above the class A that includes X. Class B's ancestor chain is therefore:

B < A < X(shadow) < Object < Kernel(shadow) < BasicObject
* This detail is not visible through the superclass API, but every class can call its ancestors method to see it:
 1.9.3p194 :001 > String.ancestors
  => [String, Comparable, Object, Kernel, BasicObject]
* The case of multiple includes:
module X;end
module Y;end

class   A
    include X
    include Y
end

class  B < A; end

B.ancestors
 => [B, A, Y, X, Object, Kernel, BasicObject]